The best use case I can think of for “A.I.” is an absolute PRIVACY NIGHTMARE (so set that aside for a moment), but I think it’s the absolute best example.
Traffic and traffic lights. Imagine every set of lights had cameras to track licence plates, cross-reference home addresses, and log travel times for regular trips for literally every vehicle on the road. Add variable speed limit signs on major roads and an unbiased “A.I.” whose one goal is to make everyone’s regular trips take as little time as possible by controlling everything.
If you can make 1,000,000 cars make their trips 5% more efficiently, that’s like 50,000 cars’ worth of emissions. Not to mention real-world time savings for people.
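The arithmetic behind that claim can be sanity-checked in a couple of lines (the numbers are the commenter's assumptions, not measured data):

```python
# Back-of-the-envelope check: 5% efficiency gain across 1M cars is
# roughly equivalent to taking 50,000 cars off the road entirely.
cars = 1_000_000
efficiency_gain = 0.05  # assumed 5% shorter / more efficient trips

equivalent_cars_removed = cars * efficiency_gain
print(equivalent_cars_removed)  # 50000.0
```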
show your work. 1 especially seems suspect, since many AIs are not trained on content like you are imagining, but instead train themselves through experimentation and adversarial networks.
Even how it trains itself can be biased based on what its instructions are.
Yes, and? If you write a bad fitness function, you get an AI that doesn’t do what you want. You’re just saying that human-written software can have bugs.
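A toy sketch of what a "bad fitness function" looks like in practice (a hypothetical example, not anything from this thread): if the fitness function rewards raw speed and ignores direction, the optimizer happily picks the candidate that races away from the goal.

```python
# Hypothetical example: the fitness function encodes the wrong objective,
# so the "best" candidate under it is useless for the real goal.
def bad_fitness(speed, heading_toward_goal):
    return abs(speed)  # bug: rewards speed, ignores direction entirely

def good_fitness(speed, heading_toward_goal):
    return speed if heading_toward_goal else -speed

# Candidates: (speed, pointed at the goal?)
candidates = [(100, False), (30, True)]

best_bad = max(candidates, key=lambda c: bad_fitness(*c))
best_good = max(candidates, key=lambda c: good_fitness(*c))
print(best_bad)   # (100, False): fast, but driving away from the goal
print(best_good)  # (30, True): slower, but actually useful
```

The bug isn't in the optimizer; it faithfully maximized exactly what it was told to. The humans wrote the wrong objective.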
That’s pretty much exactly the point they’re making. Humans create the training data. Humans aren’t perfect, and therefore the AI training data cannot be perfect. The AI will always make mistakes and have biases as long as it’s being trained on human data.