
Overcoming the Twin Traps of AI


For all the capabilities enabled by advances in generative AI technology in the past few years, problems in the underlying architecture are holding it back in multiple ways.

Counterintuitive AI is a company attempting to reinvent the AI reasoning stack to address those issues, and it believes that current LLM technology suffers from what the company calls the Twin Traps problem.

Gerard Rego, founder of Counterintuitive AI, has spent his career spanning industry and academia, holding tech leadership positions at companies like Nokia, GM India, and MSC Software, and serving as a fellow at Stanford University, The Wharton School of Business at the University of Pennsylvania, and Cambridge University.

He believes that the first of these Twin Traps stems from the fact that modern LLMs run on floating point arithmetic, which is designed for performance rather than reproducibility. With this mathematical foundation, every operation introduces rounding drift and order variance, because fractions are rounded to the closest number that can be represented in binary, so the same computation can produce different answers across different runs or machines.
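To make the order variance concrete, here is a minimal, hypothetical Python sketch (not Counterintuitive AI's code) showing that summing the same floating point numbers in two different orders produces two different answers:

```python
# Illustrative only: plain IEEE 754 doubles, the same values summed in two orders.
values = [0.1] * 10 + [1e16, -1e16]

forward = sum(values)             # the ~1.0 from the 0.1s is absorbed by 1e16, then cancelled: 0.0
backward = sum(reversed(values))  # 1e16 and -1e16 cancel first, then the 0.1s accumulate: ~1.0

print(forward, backward, forward == backward)  # 0.0 0.9999999999999999 False
```

At the scale of a large model, where billions of such operations are reduced across many GPUs in whatever order the hardware schedules them, these tiny discrepancies can accumulate into run-to-run differences.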

“Imagine you have 2 to the power of 16 digits,” Rego said. “Every time you run the machine, you’re going to pick up one of the possibilities in that number. So let’s say this time it picks up the 14th digit and answers you. You are going to say ‘this is a little different from the previous answer.’ Yeah, because it’s probabilistic math so the number might be similar but it’s not reproducible.”

The second issue is that current AI models are memoryless: they build on something called Markovian Mimicry, which essentially reaches a conclusion based on the current state rather than on past history (i.e., predicting the next word in a sentence based only on the word that came before it). In other words, they predict the next token without retaining the reasoning that led them to that output.
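A toy bigram generator makes the memoryless property concrete. The sketch below is purely illustrative (the vocabulary and the generate helper are made up, and a real LLM conditions on a much longer context window), but every step chooses the next word from the current word alone, keeping no record of why earlier words were chosen:

```python
import random

# Illustrative toy "Markov" model: the next word depends only on the current word.
transitions = {
    "the": ["cat", "dog"],
    "cat": ["sat"],
    "dog": ["ran"],
    "sat": ["on"],
    "ran": ["to"],
    "on":  ["the"],
    "to":  ["the"],
}

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        word = random.choice(transitions[word])  # only the current word matters
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the dog ran to the cat sat on the"
```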

Both of these issues contribute to the large amounts of energy consumed by AI and the GPUs powering it, with negative implications for the environment.

These Twin Traps also result in several bottlenecks:

  • Physics ceiling: At some point, making chips smaller cannot stabilize unstable math
  • Compute ceiling: Adding more chips multiplies inconsistency instead of improving performance
  • Energy and capital ceiling: Power and money are wasted on correcting computational noise

“I’m a visiting fellow at Cambridge and in 2019, 2020, I was sitting there and talking to a bunch of folks and saying ‘hey, this AI thing is going to collapse on its head in about five to six years,’ and that’s because they’re going to hit a floating point wall and energy wall,” Rego said.

He explained that today’s AI technology was built on concepts developed between the 1970s and 1990s, and there hasn’t been anything terribly groundbreaking in the last 30 years. That is what is driving Counterintuitive AI to go back to the drawing board and build something different from the ground up that may address the current limitations. He believes that the next big leap in AI will come from reimagining how machines think, rather than from continuing to scale compute and wasting a lot of energy and money in the process.

This new approach follows four main principles:

  • A reasoning-first architecture where the AI can justify its choices
  • Systems that measure the energy cost of every decision
  • Auditable logic of every reasoning step (see the sketch after this list)
  • Human-in-the-loop design where humans are augmented by AI instead of replaced
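
As a rough, hypothetical sketch of how those principles could surface in code (the record structure, field names, and energy units below are assumptions for illustration, not Counterintuitive AI's design), a reasoning trace might record each step's premise, rule, conclusion, and measured energy cost so a human can audit the chain:

```python
from dataclasses import dataclass, field

# Hypothetical, illustrative structures; not Counterintuitive AI's actual design.
@dataclass
class ReasoningStep:
    premise: str       # input the step relied on
    rule: str          # the rule or logic applied
    conclusion: str    # what was deduced
    energy_mj: float   # measured energy cost of the step, in millijoules (assumed unit)

@dataclass
class ReasoningTrace:
    steps: list[ReasoningStep] = field(default_factory=list)

    def justify(self) -> str:
        """Replay every step so a human reviewer can audit the chain."""
        return "\n".join(
            f"{i}. {s.premise} --[{s.rule}]--> {s.conclusion} ({s.energy_mj} mJ)"
            for i, s in enumerate(self.steps, 1)
        )

trace = ReasoningTrace([
    ReasoningStep("sensor reads 102 C", "threshold: flag above 100 C", "overheat flagged", 0.4),
    ReasoningStep("overheat flagged", "policy: defer to a human when uncertain", "alert the operator", 0.1),
])
print(trace.justify())
```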

The company plans to measure progress not via benchmarks, but by how consistently its systems reproduce reasoning, how safely they act when uncertain, and how energy efficient they are.

“We said let’s build a non-floating point approach, what we call deterministic mathematics. Let’s write software, which is not memoryless. So it’s actually inheriting the lineage of your thought process. Every time you interact, it understands the cause and effect, not just the fundamental question of grammar,” Rego said.
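As a purely hypothetical illustration of what a non-floating point, deterministic arithmetic buys (exact rational numbers here stand in for whatever mathematics the company actually uses), the same values summed in any order always produce exactly the same result:

```python
import random
from fractions import Fraction

# Illustrative only: exact rational arithmetic has no rounding drift,
# so reordering the operations cannot change the answer.
values = [Fraction(1, 10)] * 10 + [Fraction(10**16), Fraction(-10**16)]

shuffled = values[:]
random.shuffle(shuffled)

print(sum(values) == sum(shuffled))  # True, for every shuffle
print(sum(values))                   # exactly 1, unlike the floating point version above
```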

The company recently announced it is working on creating a new type of reasoning chip called an artificial reasoning unit (ARU) that executes causal logic, memory lineage, and verifiable deduction. It referred to the ARU as initiating the “post-floating point GPU era of computing.”

The company also plans to develop a full reasoning stack to complement the ARU, which it believes will enable anyone to build systems that “can reason with traceable logic, remember decisions and reproduce truth at scale, all with margins of safety.”

With this new stack, the reasoning behind an answer would be more openly available, in contrast to the current situation, where much of the knowledge of how generative AI systems actually work is limited to a few companies and labs.

“Scientific progress accelerates when ideas are transparent and tools are accessible. We will create interfaces for experimentation and build a community around deterministic reasoning—spanning hardware, logic, and theory. Our work stands on the shoulders of scientific tradition: when intelligence becomes reproducible, knowledge compounds faster,” the company states.


