Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, December 22, 2024

What just happened: A transformative month rewrites the capabilities of AI

Ethan Mollick
One Useful Thing Substack
Originally posted 19 Dec 2024

The last month has transformed the state of AI, with the pace picking up dramatically in just the last week. AI labs have unleashed a flood of new products - some revolutionary, others incremental - making it hard for anyone to keep up. Several of these changes are, I believe, genuine breakthroughs that will reshape AI's (and maybe our) future. Here is where we now stand:

Smart AIs are now everywhere

At the end of last year, there was only one publicly available GPT-4/Gen2 class model, and that was GPT-4. Now there are between six and ten such models, and some of them are open weights, which means they are free for anyone to use or modify. From the US we have OpenAI’s GPT-4o, Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5, the open Llama 3.2 from Meta, Elon Musk’s Grok 2, and Amazon’s new Nova. Chinese companies have released three open multilingual models that appear to have GPT-4 class performance, notably Alibaba’s Qwen, DeepSeek’s R1, and 01.ai’s Yi. Europe has a lone entrant in the space, France’s Mistral. What this word salad of confusing names means is that building capable AIs did not require some magical formula only OpenAI had; it was within reach of any company with computer science talent and the ability to get the chips and power needed to train a model.
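
To make the "open weights" point concrete, here is a minimal sketch of what running such a model locally can look like, assuming the Hugging Face transformers library is installed; the model identifier and prompt are illustrative, and gated models such as Llama 3.2 require accepting the publisher's license on huggingface.co before download.

    # Minimal sketch (illustrative, not a recommendation): loading an
    # open-weights model locally with the Hugging Face transformers library.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-3.2-1B-Instruct"  # illustrative open-weights model
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "List three risks of widely available AI models."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=80)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The point is not the specific model: because the weights are published, anyone with modest hardware can run, fine-tune, or redistribute a capable model without going through a single gatekeeper.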


Here are some thoughts:

The rapid advancements described in the article underscore the critical need for ethics in the development and deployment of AI. With GPT-4-level models becoming widely accessible and capable of running on personal devices, the democratization of AI technology presents both opportunities and risks. Open-source contributions and global participation enhance innovation but also increase the potential for misuse or unintended consequences. As Gen3 models introduce advanced reasoning capabilities, the possibility of AI being applied in ways that could harm individuals or exacerbate inequalities becomes a pressing concern.

The role of AI as a co-researcher further highlights ethical considerations. Models like o1 and o1-pro can detect errors and solve complex problems, but their outputs still require expert evaluation to ensure accuracy. That need for human oversight underscores the risk of depending on AI outputs that are not critically scrutinized. Additionally, as multimodal capabilities enable AI to engage with users in more immersive ways, ethical questions arise about privacy, consent, and the potential for misuse in surveillance or manipulation.

Finally, the transformative potential of AI-generated media, such as high-quality videos from tools like Veo 2, emphasizes the need for ethical frameworks to prevent misinformation, copyright violations, or exploitation in creative industries. The article makes it clear that while these advancements bring significant benefits, they demand thoughtful, proactive engagement to ensure AI serves humanity responsibly and equitably. Ethics are essential to guiding this technology toward positive outcomes while mitigating harm.