Someone tested a 1997 processor with 128MB of RAM—and it can run AI

Artificial intelligence is often portrayed as the domain of cutting-edge GPUs and supercomputers. But a recent experiment has turned that assumption upside down: researchers managed to run a modern AI model on a computer from 1997 with just 128MB of RAM.

A technical feat: modern AI on obsolete hardware

The achievement comes from EXO Labs, a startup founded by researchers from the University of Oxford. Their test machine was nothing glamorous: an Intel Pentium II clocked at 350 MHz with 128MB of memory. By today’s standards, it’s laughably outdated.

And yet, the team managed to run a language model based on Llama 2, with just 260,000 parameters, at a pace of 39 tokens per second. The breakthrough was made possible by BitNet, a neural network architecture that replaces traditional 32-bit floating-point weights with ternary weights (-1, 0, or 1).
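To make the idea concrete, here is a minimal sketch of the kind of ternary quantization BitNet describes (the "absmean" scheme from the BitNet b1.58 paper): scale each weight by the tensor's mean absolute value, then round and clip to {-1, 0, 1}. The function name and NumPy implementation are illustrative, not EXO Labs' actual code.

```python
import numpy as np

def ternary_quantize(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Quantize a float weight tensor to ternary values {-1, 0, 1}.

    A sketch of absmean quantization: divide by the mean absolute
    weight, round to the nearest integer, and clip to [-1, 1].
    """
    scale = np.abs(weights).mean() + 1e-8             # per-tensor scaling factor
    ternary = np.clip(np.round(weights / scale), -1, 1)
    return ternary.astype(np.int8), float(scale)      # keep scale to rescale outputs

# Example: quantize a small random weight matrix
w = np.random.randn(4, 4).astype(np.float32)
q, s = ternary_quantize(w)
print(q)          # entries are only -1, 0, or 1
print(w - q * s)  # per-weight quantization error
```

Because every weight ends up as -1, 0, or 1, the multiplications inside a matrix product collapse into additions and subtractions, which is exactly what lets inference run on a modest CPU.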

This radical simplification allows enormous compression: each ternary weight carries about 1.58 bits of information instead of 32, so a typical 7-billion-parameter model can be shrunk to just 1.38 GB, small enough to run on low-end CPUs without GPUs. EXO Labs claims the same technique could one day allow 100-billion-parameter models to run on a single CPU at speeds approaching human reading rates.
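The arithmetic behind that figure is easy to check. The numbers below are a back-of-envelope sketch that ignores packing overhead, embeddings, and activations:

```python
# Storage for 7 billion weights at 32 bits each versus ~1.58 bits
# (log2(3) bits per ternary weight); a rough estimate only.
params = 7e9

fp32_gb    = params * 32 / 8 / 1e9    # 32-bit floats -> 28.0 GB
ternary_gb = params * 1.58 / 8 / 1e9  # ternary weights -> ~1.38 GB

print(f"fp32:    {fp32_gb:.2f} GB")
print(f"ternary: {ternary_gb:.2f} GB (~{fp32_gb / ternary_gb:.0f}x smaller)")
```

That roughly 20-fold reduction is what moves a model out of datacenter territory and into the memory budget of an ordinary desktop.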

The result highlights a critical lesson: algorithmic efficiency can matter more than brute hardware power.

Toward inclusive and sustainable AI

Beyond the geeky appeal of seeing a 1990s computer run AI, the implications are profound. One of the biggest barriers to AI adoption worldwide is its cost, both in terms of equipment and energy. If optimized architectures like BitNet can make AI models run smoothly on older or modest hardware, access could spread far beyond elite labs and tech giants.


In developing countries, this could be transformative. Schools, hospitals, and small businesses could use AI for education, diagnostics, or logistics without needing expensive new infrastructure. Instead of leaving entire regions behind, AI could become globally accessible.

There’s also an environmental angle. Repurposing existing hardware instead of manufacturing new high-performance chips would reduce electronic waste and cut the carbon footprint of AI. In a world increasingly focused on sustainability, this approach aligns perfectly with the push toward greener technology.

More with less: a paradigm shift

This experiment is more than a nostalgic stunt. It signals a change in thinking: the future of AI doesn’t have to be about endlessly chasing bigger GPUs and larger server farms. Instead, it may come from smarter, leaner algorithms designed to do more with less.

That shift could give rise to a more responsible, inclusive, and democratized AI—one where the benefits of machine intelligence aren’t reserved for the wealthiest companies and countries, but shared more widely across society.

And it all started with a dusty old Pentium II proving that sometimes, progress isn’t about new machines—but new ideas.
