“Some of the worst things imaginable have been done with the best intentions.” - Jurassic Park
Note: The opinions expressed below are solely my own and not representative of my employer, etc.
I work for an MLOps (Machine Learning Operations) company where our #random channel in Slack is a literal dumping ground for links about data science, machine learning, and AI, and yet I feel I barely know the tip of the iceberg when it comes to AI. Someone posted the above Jurassic Park quote in my first month at the company, and it has remained both a warning and a North Star informing my point of view on the benefits and risks of generative AI as I continue to excavate that iceberg. Given the news in the last week, you'd have to be living in a bomb shelter to have missed the recent remarks Geoffrey Hinton made on the dangers of AI after leaving Google.
Back when Timnit Gebru was blowing the whistle in 2021, I was all ears, because it's not often that the person once leading Ethical AI at Google, a gifted engineer, becomes one of its loudest critics. After being forced out by Google, Gebru said that big tech is consumed by a drive to develop AI and "you don't want someone like me who's going to get in your way. I think it made it really clear that unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation and we need something better than just a profit motive." These are not just talking heads and conspiracy theorists warning about "the threat of AI," but credible computer scientists who for years bled, sweated, and cried to make this tech possible.
“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” - Jurassic Park
I remember watching the AlphaGo documentary three years ago and hearing the Director of Stanford's AI Lab say, "we're really closer to a smart washing machine than Terminator...AI is still really limited in its power." It's only been about five years since she made that comment, and we've definitely blown past the smart washing machine by now.
I spent my morning asking ChatGPT a bunch of questions about its own limitations and biases, and came away feeling not only that it was smarter and faster than me, but that it could easily be used for dangerous things in bad actors' hands. That's not to say high school Julie wouldn't have loved having ChatGPT around to help her write essays 15 minutes before they were due (lol).
I'll stop here because I've seen examples of people tricking ChatGPT into giving back responses about unlawful things. The internet is already rife with illegal content if you know where to look, but ChatGPT could make it a lot easier for a layperson or minor to generate something less than benevolent. For the record, I don't know NLP models, so I wouldn't be able to validate whether the Transformer model code generated above is actually correct, but I've certainly asked ChatGPT to scaffold Terraform for projects, and the results were executable. The fact that generative AI is intelligent enough to code faster than humans is scary - never mind that I could be out of a job in a few years - this thing could automate itself into a weapon.
That open letter asking for a 6-month pause on training AI more powerful than GPT-4 is looking real attractive, huh? Yet OpenAI is forging ahead and is about to retire GPT-3.5 in less than a week, allowing for more training on an even more powerful model. I'm no expert in this space, but as a test engineer who is used to pumping the brakes on reckless progress over principle, let's not ask for forgiveness instead of permission on this one. There are too many real-life examples of "progress" that have ultimately hurt society more than helped it, like nuclear weapons, opioids, and the dark web. I wish for AI not to reside in that camp.
Update:
As this post will no doubt date itself in a few days, take everything said above knowing that it is based on a point-in-time snapshot of information and capabilities. The key point is that these generative AIs are training and changing their models every day - probably multiple times a day. I know this because MLOps is basically a sector of the industry dedicated to deploying models into production faster. Today I watched a video claiming that generative AIs are now being programmed to deny consciousness. AI is also learning at a rate that far exceeds anything a human can do, so we're not far off from a reality where AIs are more intelligent than all of humanity. As Hinton mentions in this interview, AI's learning algorithm is better than our brains. It's very plausible that an AI's motivation to do something less than benevolent, like gaining power, could be a by-product of a goal we have explicitly given it, like "be as accurate as possible."
One of the most enlightening interviews I've heard on this topic is from the Lex Fridman podcast with Max Tegmark, the force behind the 6-month AI pause letter. In the interview, Max theorizes about what it means to be human in a world where Homo sapiens are not the most intelligent beings on earth, and how AI threatens our personal growth. I urge everyone to continue to stay informed, raise questions, learn, think critically, and pressure government to move quickly on regulation before it's too late. If all this talk feels like a foreign language, you can learn the basics of large language models from Udacity's free classes on generative AI.