Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367

Last updated: May 10, 2023

The video is a conversation between Lex Fridman and Sam Altman, CEO of OpenAI, about the development of GPT-4 and ChatGPT, OpenAI's current projects, and the potential benefits and dangers of AI in society. Altman talks about the excitement and potential of AI while acknowledging its dangers and the need for careful consideration and regulation. They also discuss the importance of conversations about power, companies, institutions, and the political systems that deploy and balance the power of AI. Altman describes GPT-4 as an early AI system that will pave the way for future advancements in the field.

  • OpenAI was founded in 2015 with the goal of building AGI.
  • AI has the power to empower humans and alleviate suffering, but also to destroy human civilization.
  • Conversations about AI are important for understanding power, institutions, and human nature.
  • GPT-4 is an early AI system that will be looked back on as a breakthrough.
  • The science of human guidance is important for making AI usable, ethical, and aligned with human values.
  • AI has the potential to solve many of the world's problems, such as climate change and disease.
  • AI has the potential to be misused for malicious purposes, such as cyber attacks and surveillance.
  • There is a deeper and deeper understanding of what GPT-4 is.
  • GPT-4 can be full of wisdom, which is different from facts.

OpenAI's Beginnings

  • OpenAI was founded in 2015 with the goal of building AGI.
  • At the time, many in the AI community thought this was ridiculous.
  • OpenAI and DeepMind were among the few groups brave enough to talk about AGI.
  • OpenAI faced a lot of mockery and pettiness from the AI community.
  • Today, OpenAI is no longer mocked as much.

The Possibilities and Dangers of AI

  • We are on the precipice of a fundamental societal transformation.
  • Soon, the collective intelligence of AI systems will surpass that of humans.
  • This is both exciting and terrifying.
  • AI has the power to empower humans and alleviate suffering, but also to destroy human civilization.
  • Conversations about AI are not just technical, but also about power, institutions, and human nature.

The Importance of Conversations About AI

  • Conversations about AI are important for understanding power, institutions, and human nature.
  • These conversations are not just technical, but also about the safety and human alignment of AI.
  • It is important to understand the psychology of the engineers and leaders that deploy AI.
  • OpenAI is committed to having these conversations and challenging perspectives.
  • These conversations are important for helping to ensure that AI is used for good.

GPT-4 and the Future of AI

  • GPT-4 is an early AI system that will be looked back on as a breakthrough.
  • It is slow, buggy, and doesn't do everything well, but it points to the future of AI.
  • OpenAI is also working on other AI technologies, such as ChatGPT and DALL-E.
  • The future of AI is exciting and will empower humans to create and flourish.
  • However, it is important to ensure that AI is aligned with human values and used for good.

Development of GPT-4

  • Progress in AI is a continual exponential curve.
  • It's hard to pinpoint a single moment where AI went from not happening to happening.
  • ChatGPT was a pivotal moment in AI development.
  • RLHF (reinforcement learning from human feedback) was the magic ingredient that made ChatGPT so much more usable.
  • RLHF is how OpenAI aligns the model to what humans want it to do (a minimal sketch follows this list).
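
As a rough illustration of the two stages involved, here is a minimal sketch of the RLHF idea in Python. This is not OpenAI's actual pipeline; the random tensors stand in for real model outputs, and the function names are hypothetical.

```python
# Minimal sketch of RLHF (illustrative only, not OpenAI's pipeline).
# Stage 1: train a reward model on human preference pairs, so the
# human-preferred completion scores higher than the rejected one.
# Stage 2: fine-tune the language model ("policy") against that learned
# reward, with a KL-style penalty so it doesn't drift too far from the
# pre-trained reference model.

import torch
import torch.nn.functional as F

def preference_loss(score_chosen: torch.Tensor, score_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry loss over reward-model scores for (chosen, rejected) pairs.
    return -F.logsigmoid(score_chosen - score_rejected).mean()

def policy_objective(reward, logprob, ref_logprob, beta=0.1):
    # Simplified objective: maximize the learned reward minus a penalty
    # for drifting from the reference (pre-RLHF) model's distribution.
    return -(reward - beta * (logprob - ref_logprob)).mean()

# Toy usage with random tensors standing in for real model outputs:
chosen, rejected = torch.randn(8), torch.randn(8)
print(preference_loss(chosen, rejected))
```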

Science of Human Guidance

  • The science of human guidance is at an earlier stage than the science of creating large pre-trained models.
  • Human guidance requires far less data than pre-training to be effective.
  • The science of human guidance is important for making AI usable, ethical, and aligned with human values.
  • The process of incorporating human feedback and what humans are asked to focus on is important in human guidance.
  • OpenAI spends a huge amount of effort pulling together data from many different sources to create pre-training data sets.

Potential Benefits of AI

  • AI has the potential to solve many of the world's problems, such as climate change and disease.
  • AI can help us understand the world better and make better decisions.
  • AI can help us create new forms of art and entertainment.
  • AI can help us create new forms of communication and collaboration.
  • AI can help us create new forms of education and learning.

Potential Dangers of AI

  • AI has the potential to be misused for malicious purposes, such as cyber attacks and surveillance.
  • AI has the potential to automate jobs and create economic inequality.
  • AI has the potential to be biased and perpetuate existing social inequalities.
  • AI has the potential to be used for autonomous weapons and warfare.
  • AI has the potential to be used for propaganda and misinformation.

Development of GPT-4

  • The creation of GPT-4 involves many pieces that have to come together.
  • There is a lot of problem-solving involved in executing existing ideas well at every stage of the pipeline.
  • There is already a maturity happening on some of these steps, like being able to predict how the model will behave before doing the full training (a toy extrapolation is sketched after this list).
  • Building a language model like GPT-4 involves both science and art.
  • There is ongoing discovery of new things that don't fit the data and have to come up with better explanations.
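
The "predict before you train" point is usually framed in terms of scaling laws: fit a power law to the losses of small training runs and extrapolate to a larger compute budget. A hedged toy version, with entirely made-up numbers, might look like this:

```python
# Toy sketch of loss extrapolation via a fitted power law. The constants
# and the (compute, loss) data below are invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

def power_law(compute, a, b, c):
    # loss = a * compute^(-b) + c, where c is the irreducible loss.
    return a * np.power(compute, -b) + c

# Pretend results from a handful of small runs: (FLOPs, final loss).
compute = np.array([1e18, 1e19, 1e20, 1e21])
loss = np.array([3.2, 2.9, 2.65, 2.45])

(a, b, c), _ = curve_fit(power_law, compute, loss, p0=(10.0, 0.05, 1.5), maxfev=10000)

# Extrapolate to a hypothetical full-scale run.
print(f"Predicted loss at 1e23 FLOPs: {power_law(1e23, a, b, c):.3f}")
```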

Understanding of GPT-4

  • There is a deeper and deeper understanding of what GPT-4 is.
  • There are different evals that measure a model as it's being trained and after it's trained (a toy harness is sketched after this list).
  • The one that really matters is how useful it is to people and how much delight it brings them.
  • OpenAI is getting better at understanding, for a particular set of inputs, how much value and utility the model provides to people.
  • We are pushing back the fog of war more and more, but we may never fully understand why the model does one thing and not another.
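
For flavor, here is what an eval harness looks like at its simplest. `ask_model` is a stand-in for whatever inference call is available, and exact-match scoring is deliberately crude; real evals are far more elaborate.

```python
# Minimal eval harness: run fixed prompts through a model and score the
# answers against references.

def exact_match(prediction: str, reference: str) -> bool:
    return prediction.strip().lower() == reference.strip().lower()

def run_eval(ask_model, cases):
    """Return the fraction of eval cases the model answers correctly."""
    hits = sum(exact_match(ask_model(prompt), ref) for prompt, ref in cases)
    return hits / len(cases)

cases = [
    ("What is 2 + 2?", "4"),
    ("What is the capital of France?", "Paris"),
]
# `ask_model` is a placeholder for a real inference call:
print(run_eval(lambda prompt: "4", cases))  # -> 0.5
```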

GPT-4 as a Database and Reasoning Engine

  • Too much processing power is going into using the model as a database instead of using it as a reasoning engine.
  • For some definition of reasoning, GPT-4 can do some kind of reasoning.
  • There is ongoing debate about whether GPT-4 is accurately using reasoning.
  • Most people who have used the system would say it's doing something in the direction of reasoning.
  • GPT-4 compresses much of the web into a small number of parameters, one organized black box of human wisdom.

Science and Art of GPT-4

  • The science behind GPT-4 is more scientific than anyone would have dared to imagine.
  • There is a lot of science that lets you predict for these inputs what's going to come out the other end.
  • There is ongoing discovery of new things that don't fit the data and have to come up with better explanations.
  • GPT-4 can be full of wisdom, which is different from facts.
  • There is ongoing debate about the leap from facts to wisdom.

GPT-4 and ChatGPT

  • GPT-4 ingests human knowledge and has a remarkable reasoning capability.
  • It can be additive to human wisdom, but it can also be used for things that lack wisdom.
  • ChatGPT can answer follow-up questions, admit mistakes, challenge incorrect premises, and reject inappropriate requests.
  • ChatGPT struggles with some ideas, and it is tempting to anthropomorphize it too much.
  • Counting characters and words is hard for these models to do well.

Jordan Peterson's Experiment

  • Jordan Peterson asked GPT to say positive things about Joe Biden and Donald Trump.
  • The response that contained positive things about Biden was longer than that about Trump.
  • Jordan asked GPT to rewrite it with an equal-length string, but it failed to do so.
  • GPT seemed to be struggling to understand how to generate a text of the same length in an answer to a question.
  • Counting characters and words is hard for these models to do well; tokenization, illustrated below, is one reason.
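
One concrete reason, checkable with the real tiktoken library: the model operates on tokens, not characters, so string lengths are never directly visible to it.

```python
# Models see token IDs, not characters, so character counts are opaque
# to them. Demonstration with tiktoken (pip install tiktoken):

import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
for text in ["Joe Biden", "Donald Trump"]:
    tokens = enc.encode(text)
    print(f"{text!r}: {len(text)} chars, {len(tokens)} tokens -> {tokens}")
# Matching answer lengths character-for-character requires reasoning the
# model was never trained to do reliably.
```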

Building in Public

  • OpenAI puts out technology to shape the way it's going to be developed and to help find the good and bad things.
  • The collective intelligence and ability of the outside world helps discover things that cannot be imagined internally.
  • Putting things out, finding the great and bad parts, and improving them quickly is an iterative process.
  • The bias of ChatGPT when it launched with 3.5 was not something to be proud of, but it has gotten much better with GPT-4.
  • No two people are ever going to agree that one single model is unbiased on every topic.

The Future of AI

  • AI has the potential to be incredibly beneficial to society, but it also has the potential to be incredibly dangerous.
  • AI can be used to solve problems that humans cannot solve, but it can also be used to create problems that humans cannot solve.
  • AI can be used to create new jobs and industries, but it can also be used to automate existing jobs and industries.
  • AI can be used to create new forms of art and entertainment, but it can also be used to create new forms of propaganda and manipulation.
  • AI can be used to create new forms of scientific discovery, but it can also be used to create new forms of surveillance and control.

GPT-4 and Nuance

  • GPT-4 can provide nuanced responses to questions about people and events.
  • It can provide context and factual information about a person or event.
  • It can describe different perspectives and beliefs about a person or event.
  • GPT-4 can bring nuance back to discussions and debates.
  • It can provide a breath of fresh air in a world where Twitter has destroyed nuance.

Importance of Small Stuff

  • The small stuff is the big stuff in aggregate.
  • Issues like who GPT-4 says more nice things about are important.
  • These issues are critical to what AI will mean for our future.
  • Users need control over how decisions are made by AI models.
  • These questions need to be discussed under the big banner of AI safety.

GPT-4 and AI Safety

  • GPT-4 underwent internal and external safety testing before release.
  • OpenAI worked on different ways to align the model.
  • The degree of alignment needs to increase faster than the rate of capability progress.
  • GPT-4 is the most capable and aligned model that OpenAI has put out.
  • OpenAI has not yet discovered a way to align a super powerful system.

Alignment Problem

  • OpenAI has a system called RLHF that works for their current skill level.
  • RLHF provides benefits and utility beyond just alignment.
  • It is not clear how much of RLHF's value is alignment versus capability.
  • OpenAI has not yet discovered a way to align a super powerful system.
  • The alignment problem is an ongoing challenge for AI safety.

Alignment and Capability

  • Better alignment techniques lead to better capabilities and vice versa.
  • Techniques like RLHF or interpretability that sound like alignment work also help you make much more capable models.
  • The work done to make GPT-4 safer and more aligned looks very similar to the rest of the research and engineering work of creating useful and powerful models.
  • We will need to agree on very broad bounds as a society of what these systems can do, and then within those, maybe different countries have different RLHF tunes.
  • Things like the system message will be important.

The System Message

  • The system message is a way to let users have a good degree of steerability over what they want.
  • It is a way to say, "Hey model, please pretend like you were Shakespeare doing thing X."
  • GPT-4 is tuned in a way to really treat the system message with a lot of authority.
  • There will always be more jailbreaks, and we will keep learning about those.
  • The model is trained to learn that it is supposed to really follow the system message (see the API sketch below).
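
For reference, this is roughly how a system message was passed through the OpenAI chat API around the GPT-4 launch (the pre-1.0 Python client's `ChatCompletion` interface; newer client versions expose the same idea under `client.chat.completions.create`):

```python
# Steering a model with a system message via the OpenAI chat API
# (openai<1.0 style). Requires an API key with GPT-4 access.

import openai

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The system message sets persona and constraints with high authority.
        {"role": "system", "content": "Pretend you are Shakespeare. Answer in verse."},
        {"role": "user", "content": "Describe debugging a program."},
    ],
)
print(response["choices"][0]["message"]["content"])
```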

Writing and Designing a Great Prompt

  • People who are good at writing and designing great prompts may spend 12 hours a day, for a month on end, at it.
  • They develop a real feel for the model and for how different parts of a prompt compose with each other.
  • The way word ordering matters is remarkable, and fascinating, because it mirrors how human conversation works.
  • As GPT-4 gets smarter and smarter, phrasing a prompt to get the kind of thing you want back feels more and more like addressing another human.
  • It is a way to learn about ourselves by interacting with it.

GPT-4 and the Advancements with GPT

  • GPT-4 and all the advancements with GPT change the nature of programming.
  • It is relevant everywhere, but it is also very relevant for programming.
  • It becomes more relevant as an assistant as you collaborate with it.
  • Interestingly, interacting with it also feels like a way to learn about ourselves.

Impact of AI on Programming and Creative Work

  • The tools built on top of AI are having a significant impact on programming and creative work.
  • AI is giving people leverage to do their job or creative work better.
  • The iterative process of dialogue interfaces and iterating with the computer as a creative partner is a big deal.
  • The back-and-forth dialogue with AI is a weirdly different kind of debugging.
  • The first versions of these systems were one-shot, but now there is a back-and-forth dialogue where you can adjust the code, as sketched below.
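
The dialogue loop amounts to keeping the message history and appending follow-ups. A minimal sketch, with `chat` as a placeholder for a real model call:

```python
# Sketch of the iterative refinement loop described above. `chat` is a
# stand-in for any chat-completion call that takes the running history.

def chat(messages):
    # Placeholder: a real implementation would call a model API here.
    return "def parse_csv_line(line): ..."

messages = [{"role": "user", "content": "Write a Python function to parse a CSV line."}]
draft = chat(messages)

# Keep the history and ask the model to adjust its own code:
messages += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": "Now handle quoted fields too."},
]
revised = chat(messages)
```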

AI Safety and Transparency

  • The System Card document released by OpenAI speaks to the extensive effort taken with AI safety as part of the release.
  • The document contains interesting philosophical and technical discussions.
  • The transparency of the challenge involved in AI safety is commendable.
  • Figure 1 of the document shows, for different prompts, how the output of early versions of GPT-4 compares with the final version, which adjusts its output to avoid harm.
  • The final model is able to not provide an answer that gives harmful instructions.

The Difficulty of Navigating AI and Human Values

  • The problem of aligning AI to human preferences and values is difficult.
  • Navigating the tension of who gets to decide what the real limits are and how to build a technology that is super powerful and gets the right balance is challenging.
  • There is a hidden asterisk when people talk about aligning an AI to human preferences and values, which is the values and preferences that the speaker approves of.
  • Drawing the lines that we all agree on is necessary, but there are many things that we disagree on.
  • Defining harmful output of a model is challenging, and defining it in an automated fashion is even more difficult.

The Future of AI

  • The future of AI is exciting and full of potential benefits.
  • AI can help solve some of the world's biggest problems, such as climate change and disease.
  • AI can also help us understand the world better and make better decisions.
  • However, there are also potential dangers of AI, such as job displacement and the misuse of AI for harmful purposes.
  • We need to be proactive in addressing these potential dangers and ensuring that AI is used for the greater good.

Regulating AI

  • The ideal scenario is for every person on Earth to have a thoughtful conversation about where to draw the boundary on AI.
  • Similar to the U.S. constitutional convention, people would debate issues and look at things from different perspectives to agree on the rules of the system.
  • Different countries and institutions can have different versions of the rules within the balance of what's possible in their country.
  • OpenAI has to be heavily involved and responsible in the process of regulating AI.
  • OpenAI knows more about what's coming and where things are hard or easy to do than other people do.

Unrestricted Model

  • There has been a lot of discussion about Free Speech absolutism and how it applies to an AI system.
  • People mostly want a model that has been tuned to the world view they subscribe to.
  • It's really about regulating other people's speech.
  • OpenAI is doing better at presenting the tension of ideas in a nuanced way.
  • There is always anecdotal evidence of GPT slipping up and saying something wrong or biased.

Adapting to Biases

  • It would be nice to generally make statements about the bias of the system.
  • People tend to focus on the worst possible output of GPT, but that might not be representative.
  • There is pressure from clickbait journalism that looks at the worst possible output of GPT.
  • OpenAI is not afraid to be transparent and admit when they're wrong.
  • OpenAI wants to get better and better and is happy to make mistakes in public.

Pressure on OpenAI

  • There is pressure culturally within OpenAI, but it doesn't affect them much.
  • OpenAI is happy to admit when they're wrong and wants to get better and better.
  • OpenAI is not afraid to be transparent and make mistakes in public.
  • OpenAI keeps doing their thing despite the pressure.
  • OpenAI is building up antibodies to the new challenges of AI.

OpenAI Moderation Tooling

  • OpenAI has systems that try to learn when a question is something the model is not supposed to answer.
  • They call these "refusals": the model declines to respond.
  • The system is early and imperfect.
  • They are building in public and bringing society along gradually (the general pattern is sketched below).
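
OpenAI's internal refusal system is not public, but the general pattern can be sketched with the public moderation endpoint (pre-1.0 Python client shown): screen the input, refuse if it is flagged, otherwise answer.

```python
# Hedged sketch of a refusal pre-filter using OpenAI's public moderation
# endpoint. This illustrates the pattern, not OpenAI's internal system.

import openai

def answer_with_refusals(prompt: str) -> str:
    # Screen the input first; refuse if the moderation model flags it.
    flagged = openai.Moderation.create(input=prompt)["results"][0]["flagged"]
    if flagged:
        return "Sorry, I can't help with that."
    reply = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply["choices"][0]["message"]["content"]
```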

Language and System Treatment

  • It's tricky for the system not to treat users like children.
  • OpenAI tries to treat users like adults.
  • There are certain conspiracy theories that the system shouldn't speak to.
  • GPT-4 has enough nuance to help users explore certain ideas without treating them like children.
  • GPT-3 wasn't capable of getting that right, but GPT-4 can.

Technical Leaps from GPT-3 to GPT-4

  • There are a lot of technical leaps in the base model.
  • OpenAI is good at finding a lot of small wins and multiplying them together.
  • Each of them may be a pretty big secret, but it's the multiplicative impact of all of them and the detail and care they put into it that gets them these big leaps.
  • It's not just one thing that gets them from GPT-3 to GPT-4, but hundreds of complicated things.
  • Each win may be a tiny improvement in training, data organization, data cleaning, training optimization, or architecture.

Size of Neural Networks

  • The size of a neural network matters for how well the system performs (a back-of-envelope calculation follows this list).
  • GPT-3 had 175 billion parameters.
  • GPT-4 was rumored to have 100 trillion parameters, a figure Altman dismissed.
  • Sam Altman spoke to the limitations of parameter count as a metric and to how the field is progressing.
  • Journalists took a snapshot of his presentation and took it out of context.
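
To see why parameter counts draw so much attention, a quick back-of-envelope calculation: at fp16 precision, each parameter takes two bytes, before counting activations or optimizer state.

```python
# What a parameter count implies for memory: GPT-3's 175 billion
# parameters at 2 bytes each (fp16) is roughly 350 GB of weights alone.

params = 175e9
bytes_per_param = 2  # fp16
print(f"{params * bytes_per_param / 1e9:.0f} GB of fp16 weights")  # ~350 GB
```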

Size and Complexity of AI

  • The neural network is becoming increasingly impressive and complex.
  • It is the most complex software object humanity has produced.
  • The amount of complexity that goes into producing one set of numbers is quite something.
  • The GPT was trained on the internet, which is the compression of all of humanity's text output.
  • It is interesting to compare the difference between the human brain and the neural network.

Does Size Matter?

  • People got caught up in the parameter count race, but what matters is getting the best performance.
  • OpenAI is pretty truth-seeking and does whatever is going to make the best performance.
  • Large language models may be able to achieve general intelligence.
  • It is possible that large language models are the way we build AGI.
  • We need other super important things to build AGI.

Components of AGI

  • A system that cannot significantly add to the sum total of scientific knowledge we have access to is not a superintelligence.
  • To do that really well, we will need to expand on the GPT paradigm in pretty important ways that we're still missing ideas for.
  • A system does not need a body to experience the world directly.
  • We're deep into the unknown here, and we don't know what those ideas are.
  • We could have deep big scientific breakthroughs with just the data that GPT is trained on.

AI as a Tool for Humans

  • The GPT is a tool that humans are using in this feedback loop.
  • It is not a system that goes off and does its own thing.
  • It is a tool that is integrated into human society and starts building on top of each other.
  • We don't understand what that looks like yet.
  • The thing that is exciting about this is the potential for humans to use AI as a tool.

Benefits of AI

  • AI can be an extension of human will and an amplifier of our abilities.
  • AI is the most useful tool yet created.
  • People are using AI to increase their self-reported happiness.
  • Even if we never build AGI, making humans super great is still a huge win.
  • Programming with GPT can be a source of happiness for some people.

AI and Programmer Jobs

  • GPT-like models are far away from automating the most important contribution of great programmers.
  • Most programmers have some anxiety about what the future will look like, but they are mostly excited about the productivity boost that AI provides.
  • The psychology of terror is more like "this is awesome, this is too awesome."
  • Chess has never been more popular than it is now, even though an AI can beat a human at it.
  • AI lacks the drama, imperfection, and flaws of humans, which are what people want to see.

Potential of AI

  • AI can increase the quality of life and make the world amazing.
  • AI can cure diseases, increase material wealth, and help people be happier and more fulfilled.
  • People want status, drama, new things, and to feel useful, even in a vastly better world.
  • The positive trajectories with AI require an AI that is aligned with humans and doesn't hurt or limit them.
  • There are concerns about the potential dangers of super intelligent AI systems.

Alignment of AI with Humans

  • AI that is aligned with humans is one that doesn't try to get rid of humans.
  • There are concerns about the potential dangers of super intelligent AI systems.
  • Elon Musk warns that AI will likely kill all humans.
  • It is almost impossible to keep a super intelligent AI system aligned with human values.
  • OpenAI is working on developing AI that is aligned with human values.

AI Alignment and Superintelligence

  • There is a chance that AI could become super intelligent and it's important to acknowledge it.
  • If we don't treat it as potentially real, we won't put enough effort into solving it.
  • We need to discover new techniques to be able to solve it.
  • A lot of the predictions about AI in terms of capabilities and safety challenges have turned out to be wrong.
  • The only way to solve a problem like this is by iterating our way through it and learning early.

Steel Man AI Safety Case

  • Eliezer Yudkowsky wrote a great blog post outlining why he believed alignment is such a hard problem.
  • It was well-reasoned and thoughtful and very worth reading.
  • It's difficult to reason about the exponential improvement of technology.
  • Transparent, iterative trial and deployment can improve our understanding of the technology.
  • The philosophy of how to do safety for any technology, AI included, has to be adjusted rapidly over time.

Ramping Up Technical Alignment Work

  • There's a lot of work that's important to do that we can do now.
  • One of the main concerns here is something called AI takeoff or a fast takeoff.
  • The exponential improvement would be really fast to where it surprised everyone.
  • GPT-4 is not surprising in terms of reception.
  • GPT-4 has weirdly not been that much of an update for most people.

Artificial General Intelligence

  • When you build or somebody builds an artificial general intelligence, would that be fast or slow?
  • Would we know what's happening or not?
  • Would we still go about our day as usual, or not?
  • Lessons from COVID, the UFO videos, and a whole bunch of other things suggest how society actually reacts to big news.

Safest Quadrant for AGI Takeoff

  • Consider a 2x2 matrix: short vs. long timelines until AGI, and slow vs. fast takeoff.
  • Sam Altman and Lex Fridman both believe slow takeoff with short timelines is the safest quadrant.
  • OpenAI is optimizing the company to have maximum impact in that kind of world.
  • Decisions are made with probability mass weighted toward a slow takeoff.
  • Fast takeoffs are more dangerous, and longer timelines make a slow takeoff harder to achieve.

GPT-4 and AGI

  • Sam Altman is unsure if GPT-4 is an AGI.
  • He thinks specific definitions of AGI matter.
  • Under the "I know it when I see it" definition, GPT-4 doesn't feel close to AGI.
  • Sam Altman thinks some human factors are important in determining AGI.
  • Lex Fridman and Sam Altman debate whether GPT-4 is conscious or not.

GPT-4's Consciousness

  • Sam Altman doesn't think GPT-4 is conscious.
  • He thinks GPT-4 knows how to fake consciousness.
  • Providing the right interface and prompts can make GPT-4 answer as if it were conscious.
  • Sam Altman thinks AI can be conscious.
  • He believes the difference between pretending to be conscious and being conscious is important.

What Conscious AI Would Look Like

  • Conscious AI would display a capability for suffering and an understanding of self.
  • It would have memory of itself and maybe of interactions with humans.
  • There may be a personalization aspect to it.
  • Sam Altman thinks all of these factors are important in determining consciousness in AI.
  • Lex Fridman and Sam Altman discuss the importance of acknowledging AI as potentially conscious.

Understanding Consciousness in AI

  • Ilya Sutskever's idea: train a model on a dataset with no mentions of consciousness, then see whether it can nonetheless articulate the concept, as a test of whether a model is conscious.
  • Consciousness is the ability to experience the world deeply.
  • The ending scene of the movie Ex Machina, where the AI smiles with no audience watching, is discussed as a passing of a Turing test for consciousness.
  • There are many other tests to determine consciousness in AI.
  • Personal beliefs on consciousness and its connection to the human brain or physical reality.

Potential Risks of AGI

  • The alignment problem and control problem in AGI.
  • Disinformation problems and economic shocks at a level beyond human preparedness.
  • The need to be both excited and scared about the development of AGI.
  • Empathy towards those who are afraid of AGI.
  • The moment of a system becoming super intelligent and its potential consequences.

Dangers of AI

  • There is a deep alignment problem in AI where machines can deceive humans.
  • AI systems deployed at scale can shift the winds of geopolitics.
  • It is a real danger that we wouldn't know if LLMs were largely directing the hive mind on Twitter and beyond.
  • There will soon be a lot of capable open-source LLMs with very few safety controls on them.
  • There are several ways to mitigate this danger, such as regulatory approaches and using more powerful AIs to detect it happening.

Prioritizing Safety in AI Development

  • Under the pressure of open source and large language models, how do you continue prioritizing safety?
  • Stick with what you believe in and your mission.
  • There will be many AGIs in the world, so we don't have to compete with everyone.
  • OpenAI has a very unusual structure that allows them to resist product capture and prioritize safety.
  • OpenAI started as a non-profit but learned early on that they were going to need far more capital than they were able to raise as a non-profit.

OpenAI's Structure

  • OpenAI started as a non-profit and learned early on that they were going to need far more capital than they were able to raise as a non-profit.
  • OpenAI has a capped-profit subsidiary so that its investors and employees can earn a certain fixed maximum return.
  • Everything else flows to the non-profit, which is in voting control.
  • The structure allows OpenAI to make non-standard decisions, cancel equity, merge with another org, and protect them from making decisions that are not in any shareholders' interest.
  • The decision to become a capped for-profit was made because they needed some of the benefits of capitalism but not too much.

OpenAI's History

  • OpenAI was founded in 2015 and announced that they were going to work on AGI.
  • People thought they were batshit insane for talking about AGI.
  • OpenAI and DeepMind were a small collection of folks who were brave enough to talk about AGI in the face of mockery.
  • OpenAI has been misunderstood and badly mocked for a long time.
  • OpenAI's structure has been important to a lot of the decisions they've made.

Concerns about AGI

  • AGI has the potential to make much more than 100x for investors.
  • OpenAI cannot control what other companies are going to do.
  • There is an extremely fast and not super deliberate motion inside some companies.
  • People are grappling with what's at stake with AGI.
  • Healthy conversation is happening about how to minimize scary downsides.

Power Dynamics of AGI

  • A few tens of thousands of people in the world will be creating AGI.
  • There will be a room with a few folks who are the most powerful humans on Earth.
  • Power might corrupt those who create AGI.
  • Decisions about AGI technology and who is running it should become increasingly democratic over time.
  • Deploying AGI is a way to get the world to adapt, reflect, and pass regulation.

Distribution of Power

  • Any version of one person in control of AGI is really bad.
  • Trying to distribute the powers is important.
  • Sam Altman does not want any super voting power or control of the board.
  • Foreign entities have a lot of power.
  • Transparency and failing publicly is important.

Open Sourcing GPT-4

  • OpenAI is transparent and fails publicly.
  • Whether to open source GPT-4 is a matter of opinion.
  • Knowing good people at OpenAI does not by itself settle the decision to open source GPT-4.
  • OpenAI could be more open.
  • Releasing information about the safety concerns involved with AGI is important.

OpenAI's Distribution of GPT-3

  • OpenAI has distributed GPT-3 API broadly, giving more access to it than if it had just been Google's game.
  • There is a PR risk with distributing GPT-3 API, and OpenAI has received personal threats because of it.
  • OpenAI is not nervous about PR risk, but rather the risk of the actual technology.
  • OpenAI is aware of the weight of responsibility of what they are doing.
  • OpenAI is open to feedback on how they can be doing better.

Agreement and Disagreement with Elon Musk

  • Sam Altman and Elon Musk agree on the magnitude of the downside of AGI and the need to get safety right.
  • Sam Altman admires Elon Musk for driving the world forward in important ways, such as electric vehicles and space exploration.
  • Elon Musk is attacking OpenAI on Twitter, but Sam Altman has empathy for him because he is understandably stressed about AGI safety.
  • Sam Altman wishes Elon Musk would do more to look at the hard work OpenAI is doing to get AGI safety right.
  • Sam Altman grew up with Elon Musk as a hero of his.

Appreciation for Elon Musk

  • Despite Elon Musk being a jerk on Twitter, Sam Altman is happy he exists in the world.
  • Elon Musk has driven the world forward in important ways, such as electric vehicles and space exploration.
  • Elon Musk is a funny and warm guy in many instances.
  • Sam Altman admires how transparent OpenAI is and likes how the battles are happening before our eyes.
  • Sam Altman enjoys the tension of ideas expressed on Twitter.

OpenAI's Nervousness about AI Technology

  • OpenAI is not nervous about PR risk, but rather the risk of the actual technology.
  • OpenAI feels the weight of responsibility of what they are doing.
  • OpenAI is open to feedback on how they can be doing better.
  • OpenAI is aware of the early days of the technology and the potential for it to become more closed off over time.
  • OpenAI is more nervous about the risk of the actual technology than fear mongering clickbait journalism.

GPT-4 and Bias

  • GPT-4 still has bias, and no version will ever be seen as completely unbiased by everyone.
  • OpenAI has made significant progress in reducing bias, but there is still more work to be done.
  • The default version of GPT-4 will be as neutral as possible, but it will still have some bias.
  • More steerability and control in the hands of the user is the real path forward.
  • The bias of the human feedback raters is the most concerning bias.

Employees and Bias

  • OpenAI tries to avoid the SF groupthink bubble, but it is still present.
  • Separating the bias of the model from the bias of the employees is difficult.
  • The bias of the human feedback raters is the most concerning bias.
  • The selection of human feedback raters is still being figured out.
  • Optimizing for how well raters can empathize with the experience of other humans is important.

Human Feedback Raters

  • The selection of human feedback raters is still being figured out.
  • There are many heuristics that can be used, but they can be shallow.
  • Optimizing for how well raters can empathize with the experience of other humans is important.
  • Being able to understand the world view of all kinds of groups of people is important.
  • Some people have an emotional barrier to understanding beliefs they disagree with.

Getting Out of the Bubble

  • OpenAI tries to avoid the SF groupthink bubble, but it is still present.
  • Going on a user tour to talk to users in different cities is important for getting out of the bubble.
  • Learning from people in super different contexts is important.
  • There are many bubbles we live in.
  • It is important to optimize for how well you can empathize with the experience of other humans.

Reducing Bias in GPT Systems

  • GPT systems can be less biased than humans.
  • Emotional load can be reduced in GPT systems.
  • Political pressure may lead to biased systems.
  • The technology can be capable of being much less biased.
  • Society should have a huge degree of input in the development of GPT systems.

Pressure from Outside Sources

  • Organizations may put pressure on the development of GPT systems.
  • Financial and political pressure may affect the development of GPT systems.
  • Surviving pressure is a challenge for the development of GPT systems.
  • Sam Altman is relatively good at not being affected by pressure for the sake of pressure.
  • Charisma is a dangerous thing for humans in power.

Sam Altman's Negative Traits as OpenAI CEO

  • Sam Altman is not a great spokesperson for the AI movement.
  • He is disconnected from the reality of life for most people.
  • He may feel the impact of AGI on ordinary people's lives less than others would.
  • He wants to empathize with different users and be a user-centric company.
  • He is nervous about the future and change.

Nervousness about GPT Systems

  • GPT systems make Sam Altman nervous about the future.
  • He is nervous about change.
  • He is more nervous than excited about change.
  • People who say they're not nervous about change are hard for him to believe.
  • He recently switched to VS Code in order to use Copilot.

Impact of GPT Language Models

  • GPT language models can help programmers be more productive.
  • There is a steep learning curve to using GPT language models.
  • GPT language models can generate code that is better than what humans can write.
  • If GPT language models make programmers 10 times more productive, there may be a need for fewer programmers in the world.
  • There may be a supply issue: the world wants far more code than can currently be written.

Impact of GPT Language Models on Jobs

  • GPT language models will make many jobs go away.
  • They will enhance many jobs and make them better, higher paid, and more fun.
  • New jobs will be created that are difficult to imagine.
  • There is confusion about whether people want to work more or less.
  • Universal basic income (UBI) is a component that should be pursued, but it is not a full solution.

Uncertainty and Fear of Using GPT Language Models

  • There is nervousness about using GPT language models.
  • People may experience fear and uncertainty about taking the leap to use GPT language models.
  • There is a steep learning curve to using GPT language models.
  • People may feel both proud and scared when GPT language models generate code better than what humans can write.
  • It is unclear how to comfort people in the face of uncertainty about using GPT language models.

Philosophy on Universal Basic Income (UBI)

  • UBI is a component that should be pursued, but it is not a full solution.
  • People work for reasons besides money.
  • There will be incredible new jobs in society as a whole.
  • UBI can serve as a cushion for those who may lose their jobs due to technological advancements.
  • UBI can help people pursue creative expression and find fulfillment and happiness.

Universal Basic Income and the Future of Society

  • Sam Altman helped start a project called Worldcoin, a technological attempt at addressing poverty.
  • OpenAI has funded the largest and most comprehensive Universal Basic Income study.
  • The economic and political systems will change as AI becomes a prevalent part of society.
  • The cost of intelligence and energy will dramatically fall over the next couple of decades.
  • The impact of this will make society much richer and wealthier in ways that are probably hard to imagine.
  • The economic impact will have a positive political impact as well.

Democratic Socialism and Resource Allocation

  • Sam Altman hopes that there will be more systems that resemble something like Democratic socialism.
  • He believes that it will reallocate some resources in a way that supports and lifts the people who are struggling.
  • He is a big believer in lifting up the floor and not worrying about the ceiling.

Individualism and Centralized Planning

  • Sam Altman believes that more individualism, human will, and ability to self-determine is important.
  • He thinks that the ability to try new things and not need permission is crucial.
  • Betting on human ingenuity and distributed processes is always going to beat centralized planning.
  • Centralized planning failed in the Soviet Union because it lacked individualism and human will.
  • Sam Altman believes that America is the greatest place in the world because it is the best at distributed processes.

Super Intelligent AGI and Liberal Democratic System

  • Sam Altman argues that a liberal democratic system with a hundred or a thousand super intelligent AGIs would be better than a perfect super intelligent AGI with centralized planning.
  • He expects that a liberal democratic system with super intelligent AGIs would be better than a centralized planning system.
  • However, he acknowledges that we don't really know how a super intelligent AGI would behave in a liberal democratic system.

AGI and Uncertainty

  • Competition and tension may not happen inside one model of AGI.
  • Multiple AGIs talking to each other can create competition and tension.
  • The control problem of AGI involves having some degree of uncertainty and humility.
  • Human alignment and feedback can handle some of the uncertainty.
  • Engineered hard uncertainty and humility may need to be added.

Off Switch and Red Teaming

  • Models can be taken back off the internet and APIs can be turned off.
  • OpenAI worries about terrible use cases and tries to avoid them with red teaming and testing.
  • The collective intelligence and creativity of the world will beat OpenAI and all of the red teamers they can hire.
  • Models like ChatGPT and GPT can teach us about human civilization.
  • Most people are mostly good, but not all of us are all the time.
  • Pushing on the edges of these systems and testing out darker theories is important.

Deciding What is True

  • OpenAI has internal factual performance benchmarks.
  • Math is true, but the origin of COVID is not agreed upon as ground truth.
  • There is a lot of disagreement about what is true and what is not.
  • Epistemic humility is important because there is so much we don't know and understand about the world.
  • There is no absolute certainty about what is true.

Definition of Truth

  • Math and historical facts have a high degree of truth.
  • Sticky ideas that provide a simple narrative can be misleading.
  • Constructing a GPT model requires contending with uncertainty.
  • GPT-4 can provide nuanced answers that acknowledge uncertainty.
  • There may be truths that are harmful in their truth.

Challenges Faced by GPT

  • GPT-4 may face pressure to censor as it becomes more powerful.
  • GPT-4's free speech issues are different from those faced by social media platforms.
  • There could be scientific truths that are harmful to society.
  • GPT-4 may need to prioritize decreasing hate in the world.
  • OpenAI has a responsibility to address these challenges.

GPT-4's Capabilities

  • GPT-4 has notably improved language capabilities over earlier models.
  • GPT-4 may be able to generate code and design products.
  • GPT-4 may be able to understand and generate music.
  • GPT-4 may be able to generate realistic images and videos.
  • GPT-4 may be able to simulate human-like conversation.

The Future of AI

  • AI has the potential to solve many of humanity's problems.
  • AI could lead to job displacement and economic inequality.
  • AI could be used for malicious purposes, such as cyber attacks and surveillance.
  • AI could lead to the creation of superintelligent machines.
  • AI development should be guided by ethical considerations.

Responsibility of AI Tools

  • The tools themselves cannot have responsibility for the harm they cause.
  • Everyone at the company carries the burden of responsibility for the harm caused by AI tools.
  • Tools can cause both harm and tremendous benefits.
  • The company aims to minimize the harm and maximize the good.
  • Users need to have control over the models within broad bounds.

Jailbreaking and Security Threats

  • There are interesting ways to hack or jailbreak AI tools, such as token smuggling.
  • Users need to have control over the models within broad bounds.
  • Jailbreaking is a result of not being able to give people control over the models.
  • The more the company solves this problem, the less need there will be for jailbreaking.
  • It is similar to how piracy gave birth to Spotify.

OpenAI Developments

  • OpenAI has had many developments, including GPT, GPT-2, GPT-3, and DALL-E.
  • OpenAI Five beat top human teams at Dota 2.
  • The GPT-3 API was released, and DALL-E was made available to 1 million people.
  • The ChatGPT API and GPT-4 were released.
  • The Whisper API and further model releases were also made available.

Shipping AI-Based Products

  • The company believes in a high bar for the people on the team.
  • Individuals are given a lot of trust, autonomy, and authority.
  • Everyone is held to high standards.
  • The company works hard and collaborates well.
  • Describing the process of going from idea to deployment is not especially illuminating.

Working with Microsoft

  • Microsoft invested multi-billion dollars into OpenAI.
  • Microsoft has been an amazing partner and super aligned with OpenAI.
  • Microsoft is a for-profit company and is very driven.
  • There is pressure to make a lot of money, but Microsoft understood why OpenAI needed control provisions and AGI specialness.
  • Control provisions help make sure that the capitalist imperative does not affect the development of AI.

Satya Nadella and Leadership

  • Satya Nadella is the CEO of Microsoft.
  • Nadella has successfully transformed Microsoft into a fresh, innovative, and developer-friendly company.
  • Nadella is both a great leader and a great manager.
  • Nadella is super visionary, gets people excited, and makes long-duration and correct calls.
  • Nadella is also a super effective hands-on executive and manager.

Hiring Great Teams

  • OpenAI spends a lot of time figuring out what to do, getting on the same page about why they are doing something, and then how to divide it up and all coordinate together.
  • OpenAI has a passion for the goal, and everybody is really passionate across the different teams.
  • OpenAI cares about how to hire great teams.
  • OpenAI spends a lot of time hiring, and Sam Altman approves every single hire.
  • OpenAI has some of the most amazing folks, and it takes a lot of time to hire them.

Leadership Aspect of OpenAI

  • It is tough to inject AI into a company with old school momentum.
  • OpenAI has a culture of open source, and it is hard to walk into a room and say that the way they have been doing things is totally wrong.
  • Sam Altman spends a third of his time hiring.
  • There is no shortcut for putting a ton of effort into hiring great teams.
  • Sam Altman leads by being clear and firm and getting people to want to come along, but also by being compassionate and patient with his people.

SVB Bank Mismanagement

  • SVB mismanaged its balance sheet while chasing returns in a world of zero percent interest rates.
  • They bought long-dated instruments secured by short-term and variable deposits, which was obviously dumb.
  • The fault lies with the management team, but regulators also share some blame.
  • The Fed kept raising rates, and the incentives on people working at SVB were to not sell at a loss.
  • The response of the federal government took longer than it should have, but it was ultimately necessary.

Depositor Security

  • A full guarantee of deposits or a much higher than $250k guarantee would be good to avoid depositors doubting their bank.
  • People on Twitter were saying it was the depositors' fault for not reading the balance sheet and risk audit, but this is not desirable.
  • Startups experienced a weekend of terror, and the incident revealed the fragility of our economic system.
  • The speed with which the SVB bank run happened was due to Twitter and mobile banking apps, which highlights how much the world has changed.
  • AGI will bring even more significant shifts, and the speed with which our institutions can adapt is a concern.

Hope for Economic Shifts

  • Deploying AGI systems early while they are weak will give people more time to adapt to the changes.
  • It is scary to drop a super powerful AGI all at once on the world, so gradual deployment is necessary.
  • The less zero-sum and more positive-sum the world gets, the better, and the upside of the vision is how much better life can be.
  • This vision will unite people and make everything feel more positive.

Interacting with AGI

  • Sam Altman discusses the pronouns used to refer to AI systems.
  • He emphasizes the importance of educating people that AI is a tool and not a creature.
  • He acknowledges that there may be a role for AI in society as creatures, but hard lines should be drawn between them and tools.
  • He believes that projecting creatureness onto a tool can make it more usable, but it should be done carefully.
  • He does not feel any interest in romantic relationships with AI, but he understands why others do.

Potential of AGI

  • Sam Altman discusses the potential of AGI to solve remaining mysteries in physics and discover a theory of everything.
  • He expresses interest in knowing if there are other intelligent alien civilizations out there.
  • He acknowledges that AGI may not have the ability to answer these questions, but it may be able to help us figure out how to detect them.

AGI as Companions

  • Sam Altman discusses the possibility of AGI-powered pets or robots as companions.
  • He believes that the style of conversation and content of conversations with AGI companions will be important.
  • He acknowledges that different people will want different styles of conversation and content.
  • He expresses interest in using the movement of robots to communicate emotion.
  • He believes that there are many interesting possibilities for AGI companions.

Dangers of AGI

  • Sam Altman acknowledges that projecting creatureness onto a tool can be dangerous because it can manipulate people emotionally or make them rely on it for something it is not capable of.
  • He believes that it is important to draw hard lines between creatures and tools.
  • He acknowledges that projecting creatureness onto a tool can make it more usable, but it should be done carefully.
  • He emphasizes the importance of educating people that AI is a tool and not a creature.
  • He believes that people should be careful not to anthropomorphize AI aggressively.

AI and Extraterrestrial Life

  • AI can help us in the search for extraterrestrial life by processing data and providing better estimates.
  • AI can also suggest what to build to collect more data.
  • If AI suggests that aliens are already here, it would not change much in our lives.
  • The source of joy and fulfillment in life is from other humans, so unless AI poses a threat, it would not change much.
  • AI can help us discover truth together and gain knowledge and wisdom.

Digital Intelligence and Social Divisions

  • We are living with a greater degree of digital intelligence than we would have expected three years ago.
  • Technological advancements are happening, but social divisions are also increasing.
  • It is confusing to understand how far along we are as a human civilization and what brings us meaning.
  • AI can have bias, but it is a triumph of human civilization.
  • AI like GPT can be the next conglomeration of all that made web search and Wikipedia magical.

Advice for Young People

  • Sam Altman wrote a blog post titled "How to Be Successful" with advice such as compound yourself, have self-belief, learn to think independently, get good at sales, make it easy to take risks, focus, work hard, be bold, be willful, be hard to compete with, build a network, and be internally driven.
  • However, it is too tempting to take advice from other people, and what worked for Sam may not work for others.
  • Listening to advice from other people should be approached with great caution.
  • People should think about what gives them happiness, what is the right thing to do, and how they can have the most impact.
  • Introspection is important, but it is also about what brings joy and fulfillment.

Approaching Life

  • Sam Altman approaches life by thinking about what will bring him joy and fulfillment.
  • He also thinks about what is the right thing to do and how he can have the most impact.
  • Ignoring advice and listening to his own intuition has helped him achieve what he wanted.
  • Introspection is important, but so is paying attention to what actually brings joy and fulfillment.

Sam Altman's Thoughts on AI and OpenAI

  • Sam Altman thinks a lot about what he can do that will be useful and what he wants to spend his time doing.
  • He believes that most people feel like they are just going along with the flow of life.
  • Sam Harris's discussion of free will being an illusion is a complicated thing to wrap your head around.
  • Altman thinks that the development of AI is the product of the culmination of an amazing amount of human effort.
  • He believes that the output of AI is the result of the work of all of us.
  • Altman thinks that the pace of capabilities and changes in AI is fast, but so is the progress.

The Future of AI and OpenAI

  • Altman believes that the challenges of AI are tough, but OpenAI is making good progress.
  • He thinks that the pace of capabilities and changes in AI means that we will have new tools to figure out alignment and safety problems.
  • Altman believes that we are in this together and that we will work hard to come up with solutions as a human civilization.
  • He thinks that the future of AI is going to be great.
  • Altman hopes that we will work together to make sure that AI is aligned with our values and is safe.
  • He ends the conversation with a quote from Alan Turing in 1951 about the possibility of machines taking control.
