I feel like many people are focusing on the philosophical elements separating artificial intelligence from real intelligence, or on how we can evaluate how smart an AI is versus a human. I don’t believe AI needs to feel, taste, touch, or even understand. It does not need consciousness to assist us in most tasks. What it needs is to assign positive or negative values. It will be obvious that I’m not a programmer, but here’s how I see it:
Let’s say I’m doing a paint job. All defects have a negative value: drips, fisheyes, surface contaminants, overspray, etc. Smoothness, uniformity, good coverage, and luster have positive values. The AI does not need a sentient sense of aesthetics to know that drips = unwanted outcome. In fact, I can’t see an AI ever “knowing” anything of the sort. Even as a text-only model, you can feed it accounts of people’s experiences, and it will find the negative-value words associated with them: frustration, disappointment, anger, unwanted expenses, extra work, etc. Drips = bad.
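To make that concrete, here’s a rough Python sketch of what “assigning values” could look like. The outcome names and weights are invented for illustration, not taken from any real paint standard:

```python
# Toy value table: negative numbers for defects, positive numbers for
# desirable qualities. The specific weights are assumptions for the example.
OUTCOME_VALUES = {
    "drips": -3,
    "fisheyes": -2,
    "surface_contaminants": -2,
    "overspray": -1,
    "smoothness": 2,
    "uniformity": 2,
    "good_coverage": 2,
    "luster": 1,
}

def score_paint_job(observations):
    """Sum the values of the observed outcomes.
    No aesthetics needed: drips subtract, luster adds."""
    return sum(OUTCOME_VALUES.get(obs, 0) for obs in observations)

print(score_paint_job(["smoothness", "good_coverage", "drips"]))  # 2 + 2 - 3 = 1
```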
What it does have is instant access to all the paint data sheets, all the manufacturers’ recommended settings, spray distances, effects of moisture and temperature, etc., plus science papers, accounts from paint chemists, patents, and so on. It will then use this data to increase the odds that the user gets “positive value” outcomes. Feed it the relevant values, and it will tell you what the problem is. I think we’re almost advanced enough that a picture would do (?)
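For the diagnosis step, a minimal version is just a lookup from an observed defect to the causes recorded in data sheets and troubleshooting guides. The entries below are common rules of thumb, listed only to illustrate the structure:

```python
# Hypothetical defect -> likely-causes table, the kind of mapping that
# could be compiled from data sheets and painters' troubleshooting guides.
LIKELY_CAUSES = {
    "drips": ["gun held too close", "too much material per pass", "paint over-thinned"],
    "fisheyes": ["silicone or oil contamination", "surface not degreased"],
    "overspray": ["gun held too far", "air pressure too high"],
}

def diagnose(defect):
    """Return the recorded likely causes for a defect, if any."""
    return LIKELY_CAUSES.get(defect, ["no recorded cause; inspect manually"])

print(diagnose("drips"))
```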
A painter AI could self-correct easily, without needing to feel pride or a sense of accomplishment (or frustration), by simply comparing its work against the ideal result and pulling from a database of corrective measures. It could act as a supervisor to a human worker. A robot arm driven by AI could hold your hand and teach you the right speed, distance, angle, etc. It can give feedback. It can even give encouragement. It might not be economically viable compared to an experienced human teacher yet, but I’m convinced it’s already being done, or could be. A robot teacher can train people 24/7.
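The self-correction loop can be sketched the same way: measure, compare to a target, and pull a fix from a table. The target thickness, tolerance, and corrective rules below are made-up numbers, just to show the shape of the loop:

```python
# Assumed spec values for the example only.
TARGET_THICKNESS_MILS = 2.0   # ideal dry film thickness
TOLERANCE_MILS = 0.2

# Corrective measures keyed by the direction of the error.
CORRECTIONS = {
    "too_thin": "slow the pass or add another coat",
    "too_thick": "speed up the pass or back the gun off",
}

def correct(measured_mils):
    """Compare a measurement to the ideal and return a corrective measure."""
    error = measured_mils - TARGET_THICKNESS_MILS
    if abs(error) <= TOLERANCE_MILS:
        return "within spec, no correction needed"
    return CORRECTIONS["too_thick" if error > 0 else "too_thin"]

for reading in (1.5, 2.1, 2.6):
    print(reading, "->", correct(reading))
```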
In the same way, a cooking AI could use ratings from human testers to determine the overall best seasoning combo, without ever experiencing taste or the pleasure of a good meal.
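Same pattern for the cooking AI: average the testers’ ratings per combo and pick the highest, no taste buds required. The combos and scores here are made up:

```python
from collections import defaultdict
from statistics import mean

# Invented (combo, rating) pairs standing in for human tester feedback.
ratings = [
    ("salt+rosemary", 8), ("salt+rosemary", 9),
    ("salt+cumin", 6), ("salt+cumin", 7),
    ("paprika+garlic", 9), ("paprika+garlic", 8),
]

by_combo = defaultdict(list)
for combo, score in ratings:
    by_combo[combo].append(score)

best = max(by_combo, key=lambda c: mean(by_combo[c]))
print(best, mean(by_combo[best]))  # highest average rating wins
```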
Does this make sense to anyone else?