WynoraLabs

What is AI?

A two-minute walk-through. AI is a tool — and the more clearly you see how it works, the better you use it. Watch the video, then read the long-form chapters below for the full breakdown.

1920 × 1080 · 124 seconds · narrated
The full read

AI in eight chapters

The video gets you up to speed in two minutes. This long-form version expands every chapter with the history, the actual numbers, and the tradeoffs — so you can study at your own pace.

Chapter 1

What AI actually is

The simplest way to think about modern AI is autocomplete. Your phone's keyboard suggests the next word based on the last few you typed. A frontier AI model does the same thing — only it has read most of the open internet, every digitized book it could obtain, and millions of code repositories. So when it predicts the next word, the “autocomplete” can write essays, explain quantum physics, debug Python, or translate between a hundred languages.

Mechanically, it works like this: the model takes a sequence of text (called a prompt), breaks it into pieces called tokens, and predicts which token most likely comes next. It does that one token at a time. The illusion of thinking, conversation, or reasoning emerges from doing this prediction extremely well at massive scale.
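The loop described above can be sketched in a few lines of Python. This is a toy, not a real model: the "model" here is just a bigram frequency table built from a ten-word corpus, standing in for the trillion-parameter function a frontier model learns. The corpus, the `predict_next` helper, and the greedy decoding strategy are all illustrative choices.

```python
from collections import Counter

# Toy illustration of the loop described above. The "model" is just a
# bigram table: for each token, count which token follows it in a tiny
# corpus. A real frontier model replaces this table with a learned
# function over trillions of parameters, but the generation loop --
# predict one token, append it, repeat -- has the same shape.
corpus = "the cat sat on the mat the cat ate the rat".split()

# "Training": record every (token, next-token) pair.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def predict_next(token):
    # Greedy decoding: always pick the most frequent successor.
    return Counter(follows[token]).most_common(1)[0][0]

# Generation: start from a prompt and emit one token at a time.
tokens = ["the"]          # the prompt
for _ in range(4):
    tokens.append(predict_next(tokens[-1]))

print(" ".join(tokens))   # → the cat sat on the
```

Real systems sample from a probability distribution rather than always taking the top choice, which is why the same prompt can produce different answers.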

That's it. There is no understanding in the human sense — no inner voice, no beliefs, no consciousness. Just a very, very good prediction engine.

Chapter 2

Where it came from

The phrase Artificial Intelligence was coined in the summer of 1956, at a six-week workshop at Dartmouth College. Ten researchers — John McCarthy, Marvin Minsky, Claude Shannon, Nathaniel Rochester, and others — gathered with a single proposal: that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

For the next 60 years, AI delivered impressive narrow wins (chess in 1997, Jeopardy in 2011, Go in 2016) but no general system. The breakthrough came in 2017 with the transformer architecture — a way of processing language that scales beautifully when you throw more data and compute at it. ChatGPT, launched in late 2022, was the moment that made the curve impossible to ignore.

Chapter 3

How it learns

Frontier models train on roughly 15 trillion tokens, about 11 trillion words. That's essentially the open web, every digitized book, every public code repository, transcripts, papers, forum threads. The corpus is read once or twice during training, never deleted, never “remembered” in the human sense: its patterns get distilled into the model's parameters.

A parameter is a tiny number — a dial — that the model adjusts during training. Modern frontier models have over one trillion parameters. Each one is a single weight in a giant mathematical function that maps tokens to next-token probabilities. You can't inspect any one parameter and learn anything; the knowledge lives in the pattern of all of them together.
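To make "a giant mathematical function that maps tokens to next-token probabilities" concrete, here is a deliberately tiny sketch: nine hand-picked weights instead of a trillion learned ones, and a softmax to turn raw scores into probabilities. The vocabulary and weight values are invented for illustration.

```python
import math

# Toy version of "parameters map tokens to next-token probabilities":
# one row of weights per current token scores each candidate next token,
# and softmax turns the scores into a probability distribution. Real
# models have over a trillion such weights; this one has nine, and they
# are made up rather than learned.
vocab = ["cat", "sat", "mat"]

weights = {
    "cat": [0.1, 2.0, 0.3],   # after "cat", this toy leans toward "sat"
    "sat": [0.2, 0.1, 1.5],
    "mat": [1.2, 0.4, 0.2],
}

def next_token_probs(token):
    # Softmax: exponentiate each score, then normalize so they sum to 1.
    exps = [math.exp(s) for s in weights[token]]
    total = sum(exps)
    return {t: e / total for t, e in zip(vocab, exps)}

probs = next_token_probs("cat")
print(max(probs, key=probs.get))  # the single most likely next token
```

Training is the process of nudging those weight values, millions of times over, until the predicted distributions match the corpus.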

The wild part: the cost of delivering a given capability drops by roughly 100× every five years. Algorithmic efficiency improves roughly 3× per year and hardware adds another 1.4×; compounded over five years, those factors multiply out to well over 100×, so a capability that costs $100 to deliver in 2025 should cost $1 or less by 2030.
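As a sanity check on the compounding, here is the arithmetic, under the assumption that the two per-year factors stack cleanly every year:

```python
# Multiply the per-year improvement factors quoted above and compound
# over five years. The product lands well above the 100x headline
# figure, so the $100 -> $1 claim is, if anything, conservative.
algorithmic_per_year = 3.0   # algorithmic efficiency gain per year
hardware_per_year = 1.4      # hardware gain per year
years = 5

total_gain = (algorithmic_per_year * hardware_per_year) ** years
cost_2030 = 100.0 / total_gain

print(f"{total_gain:.0f}x cheaper: $100 in 2025 -> ${cost_2030:.2f} in 2030")
```

In practice the factors are noisy year to year, which is why the five-year 100× figure is the safer summary.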

Chapter 4

The family tree

People say “AI” to mean a lot of different things. The ecosystem has distinct branches:

  • Foundation models — the giant general-purpose engines (GPT-4/5, Claude, Gemini, Llama, Mistral). Trained once, used everywhere.
  • Agents — software that wraps a foundation model with memory, tools, and a planner so it can take multi-step actions. Manus, Devin, Cursor agents, Claude Code.
  • Chips — the hardware (NVIDIA H100/B200, Google TPUs, AMD MI300, custom ASICs) that the math runs on. Without specialized silicon, the math stays slow.
  • Evaluations — benchmarks and red-team tools that measure what models can and can't do. SWE-bench, GPQA, ARC-AGI, MMLU. The scoreboard.
  • Inference & deployment — services that host models cheaply at scale (Together, Groq, Cerebras, Anyscale). The pipes.

No single company does all of this well. The people building agents depend on the people building foundation models, who depend on the people building chips. It's an ecosystem.

Chapter 5

What it's actually used for

The hype is loud, but the real-world wins are concrete:

  • Healthcare. AI matches or exceeds expert radiologists at detecting cancers, fractures, and stroke signs from medical images. Source: Nature Medicine 2024 meta-analysis.
  • Science. DeepMind's AlphaFold has mapped over 200 million protein structures — the kind of work that took a PhD candidate a year now takes seconds. The database is open and free.
  • Software. Developers using AI assistants finish coding tasks roughly 55% faster, with comparable code quality (GitHub 2,000-developer study).
  • Education. Bloom's “two-sigma problem” — the finding that one-on-one tutoring lifts the average student to the 98th percentile — is being closed by AI tutors. Khan Academy's Khanmigo is the most-watched example.
  • Translation. Real-time translation across 100+ languages (Meta NLLB, Google Translate). Effectively turns the entire web into a multilingual library.
  • Creative work. Knowledge workers using ChatGPT finish writing tasks 34% faster, with 18% higher rated quality (MIT, Brynjolfsson & Liu, 2023).

The pattern: AI doesn't replace the expert — it raises the floor for everyone below them.

Chapter 6

Where it's going

Predicting AI more than a few years out is mostly storytelling. But the consensus shape across forecasters (McKinsey, Goldman Sachs, IEA, PwC, Stanford HAI) looks roughly like this:

  • 5 years. AI is embedded in every white-collar workflow. ~$300B invested annually. ~600M weekly users on a single platform.
  • 10 years. +$7T to global GDP. AI runs in 4–6% of global electricity (datacenters) — the energy footprint becomes a first-class problem.
  • 20 years. AI co-discovers materials — fusion catalysts, room-temperature superconductors, new pharmaceuticals — at a pace humans can't match alone. Information ecosystem fragments into regional internets.
  • 50 years. AI economy projected at $15.7T (PwC). Compute concentrated in five nations (Stanford HAI). The geography of capability becomes a geopolitical fact.
  • 100 years. Disease cure-rate above 95% on currently terminal diseases (WHO models). Climate response is co-piloted by AI (IPCC SSP scenarios). The curve bends — toward abundance, or toward fragmentation. We choose which.

Chapter 7

Both pages of the book

Every powerful technology comes with two pages. Fire cooked food and cleared land — and burned cities. Electricity ran factories and saved lives in hospitals — and electrocuted careless workers. The internet collapsed the cost of communication — and turned attention into a battleground.

The honest list of AI's downside page:

  • Misinformation at scale. Generating plausible fake content is now nearly free.
  • Job displacement. Some roles will move, some will compress, some will vanish — the question is how fast and whether retraining keeps up.
  • Concentration. The cost of training a frontier model puts that capability in the hands of a small number of labs and nations.
  • Energy. Datacenters already pull meaningful percentages of national grids. The math compounds.
  • Misalignment. A system optimized for the wrong proxy will deliver the wrong outcome at scale. Eval and safety research is the discipline trying to keep ahead of this.

None of these mean AI is bad. They mean AI is a tool, and tools have to be wielded consciously. The page you read on it is the page you write.

Chapter 8

What to do with this

Three habits that will keep you ahead of most people trying to use AI:

  1. Use it on real work. Reading about AI teaches you nothing. Picking one task you do every week and putting AI in the middle of it teaches you everything. Notice where it fails. Notice where it shines.
  2. Verify outputs. Models hallucinate specific facts confidently. If the output matters, check it against a primary source. Keep human judgment in the loop.
  3. Build the muscle. Prompting is an acquired skill. Read, copy, refine. The people who get the most out of AI are not the smartest — they're the most practiced.

AI is a tool. We hold the pen.

Sources referenced: Dartmouth Workshop (1956 proposal); Pearson / OpenAI / Anthropic / DeepMind technical reports; Nature Medicine 2024; GitHub developer productivity study; Bloom (1984) on the 2-sigma problem; McKinsey, Goldman Sachs, IEA, PwC, Stanford HAI forecasts. Specific stat citations are visible in the on-screen captions of the video.