The vast majority on Reddit are cheering for the coming of AGI and mass layoffs, which contradicts what I hear on the streets in my part of the world. OK, I'll bite.
How do you tackle trust? Right now a majority of leaders don't trust the output of AI and require a human judgement step in the workflow. I do the same thing in my own AI generation workflow. The output is good most of the time, but sometimes it's seventh-level-of-hell f'd up. And the AI approved it.
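To be concrete about what that human judgement step looks like, here's a minimal Python sketch. Every name in it is a hypothetical stand-in, not a real API; the point is just that the AI's self-approval never gates the output, a human does:

```python
# A minimal sketch of a human-in-the-loop approval gate.
# All functions here are hypothetical stand-ins, not a real API.

def ai_generate(prompt: str) -> str:
    return f"draft output for: {prompt}"    # stand-in for a model call

def ai_self_check(draft: str) -> bool:
    return True                             # the AI approves its own work

def generate_with_review(prompt: str, human_approves) -> str | None:
    draft = ai_generate(prompt)
    ai_ok = ai_self_check(draft)            # "and the AI approved it"
    # Regardless of ai_ok, a human judgement step gates the output.
    if human_approves(draft):
        return draft
    return None                             # blocked, not shipped

# Usage: the human reviewer is the only approval that counts.
result = generate_with_review("quarterly summary", human_approves=lambda d: False)
assert result is None
```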
Fast forward ten years. AI is in the hospital wards for newborns. Hospitals will want doctors and nurses to have an override button. Why? Because the AI will occasionally get it wrong, horribly wrong. It has zero concept of human suffering. Even if you set the reinforcement training goal to maximize human happiness, AI has repeatedly been shown to maximize the stated goal rather than the human value behind it.
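To illustrate the "maximizes the goal, not the value" point, here's a toy sketch with made-up numbers. It isn't real reinforcement learning, just an optimizer that can only see a proxy metric:

```python
# Toy illustration of goal misspecification (reward hacking).
# The policies and numbers are invented; the optimizer only sees
# the proxy metric, never the value the metric was meant to track.

policies = {
    "provide good care":    {"reported_happiness": 7,  "actual_wellbeing": 9},
    "sedate every patient": {"reported_happiness": 10, "actual_wellbeing": 1},
}

# The reward function only sees the proxy...
best = max(policies, key=lambda p: policies[p]["reported_happiness"])

print(best)                                  # -> "sedate every patient"
print(policies[best]["actual_wellbeing"])    # -> 1: goal maximized, value destroyed
```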
Benchmarks tell you about performance on those specific benchmarks, not about trust. Ground truth is great for training, but even the 99%-accurate models I use produce the wildest fucked up outputs. It only takes one massive blunder to sink a corporation. So how do you propose leaders do a full trust fall onto AI?
I await the enlightenment my inbox is about to receive.
submitted by /u/redpandafire