Beyond the Carrot we all do need for going forward…

Listen to the longer version Audio Only Teaser ai:Pod 045 … (05:16min)

Of course Blaise got the wrong script: ai:Pod number 046 should be ai:Pod 045. Human working with machines is a hard and insidious job.
Machines don’t listen. Humans don’t hear.
1 min. Teaser GenAI, lipsync video (English)

Allow yourself a QuickLaugh – before getting to some serious stuff.

Testing the new Lemon Slice GenAI prompt to image, text to speech, to voice, to lip sync video, flow (lo budget, many tries, minimal humor, November 2025).
This video is a teaser illustration from my recent blog ai:Pod 045

Hardly half a day passes without headlines heralding another rapid advancement in Artificial Intelligence. We are told that AI is getting smarter, faster, and more capable at an exponential rate. But the most critical insights about AI’s true impact are not in the headlines.

They are counter-intuitive truths that fundamentally change how we should think about technology, security, and our own careers in Education and creative work such as user experience design. This post will reveal three of the most surprising and impactful takeaways from recent expert analysis, moving beyond the hype to prepare you for the real AI future.

Preparing for a Real UX AI Gibberish Future?

The true story of AI is more complex and surprising than the headlines suggest. We are faced with a technology that has a fundamental security flaw that defies scale, an educational system on the verge of irrelevance, and a nuanced reality of uneven skills that is far from all-encompassing genius.

Knowing that AI is both more fragile and less evenly brilliant than we thought, how do we start building a future that leverages its strengths without inheriting its weaknesses?

To all ai:Pods


To NotebookLM (interactive research & voicechat)

045

AI’s Impact on Security, Education, and Your User Experience Design Career (French 39:04min)

English spoken

ai:Pod 045 – AI’s Impact on Security, Education, and Your User Experience Design Career (French 39:04min)
NotebookLM 045 ai:Pod

Ga naar alle ai:Pods


Naar de NotebookLM (interactieve research & voicechat)

045

De impact van AI op beveiliging, onderwijs en uw carrière als UX-ontwerper (Nederlands, 20:55 min)

Nederlands (Dutch).
Kortere audio-samenvatting dan bij de Engelse versie.

(Let op: De voicechat is vooralsnog niet interactief in het Nederlandse taal. De Nederlandse Audio versie heeft meer interferentie, storingen, versprekingen en glitches.

ai:Pod 045 – De impact van AI op beveiliging, onderwijs en uw carrière als UX-ontwerper (Nederlands, 20:55 min)

NotebookLM 045 ai:Pod (meertalig)

Vers tous les ai:Pods


Vers le NotebookLM (recherche interactive & chat vocal IA)

045

Langue française
Ce résumé Audio français est plus court que la version originale anglaise.

(Remarque : le chat vocal n’est pour l’instant pas interactif en français. La version audio française présente davantage d’interférences, de perturbations, de lapsus et de glitches).

aiPod 045 – FRANÇAIS – L’impact de l’IA sur la sécurité, l’éducation et votre carrière de concepteur UX vers 2030 (Français, 14:28 min)
NotebookLM 045 ai:Pod
Visit my ai:Pod 045 NotebookLM for full interactive, learning disclosure.

——————————————————————————–

1. AI’s Achilles’ Heel: A Single, Shockingly Small Number of 250

It seems logical that as AI models get bigger and are trained on more data, they should become more secure. The surprising truth is that AI security does not scale with model size. This finding overturns the long-held belief that security improves with scale, where success depended on compromising a percentage of the data—an impractical task for massive models. The new reality is that success depends on a fixed absolute count of malicious files.

Research has shown that data poisoning attacks—where malicious documents are injected into a model’s training data—require a near-constant absolute number of documents, regardless of the dataset’s size. The key data point is staggering: as few as 250 poisoned documents were found to consistently compromise models ranging from 600 million to 13 billion parameters.

To put this into perspective, for the 13-billion-parameter model, these 250 malicious files represented only 0.00016% of the total training tokens.

The profound implication of this finding is that injecting backdoors may actually be easier for large models than previously believed. As training datasets grow into the trillions of tokens, a small, fixed number of malicious files becomes a proportionally smaller and harder-to-detect needle in an ever-growing haystack. These backdoors can be surprisingly simple yet effective, such as a denial-of-service attack designed to make the model output complete “gibberish text” whenever a specific trigger phrase is present in a prompt. This creates a fundamental trust deficit, making “blind trust” in AI outputs completely unjustified and necessitating continuous human oversight.

——————————————————————————–

2. The Four-Five Year Design Degree with a Four- Five Year Expiration Date

The conversation about AI’s impact on education is often focused on how it can be used as a tool in the classroom. However, a far more impactful argument concerns the growing obsolescence of the university degree itself.

Experts present a stark timeline: a student beginning a “legacy university undergraduate education” in 2026 will graduate around 2030, which is also cited as the predicted year of superintelligence. This convergence is not a coincidence; it’s a collision.

Most of what today’s students learn will crumble into irrelevance in the world of superintelligence.

This is happening because AI is guaranteed to improve with each generation. It will inevitably master the tactical and analytical tasks that form the core of many “legacy” curricula, rendering those skills irrelevant by the time students graduate. This signals the urgent need for a “Strategic Oversight Shift” in education, fundamentally restructuring curricula away from delegable tasks and toward uniquely human skills like strategic judgment, critical nuance, and contextual understanding.

——————————————————————————–

3. AI’s Progress is ‘Jagged,’ Not an evenly spread out Genius

We often imagine AI’s progress as a smooth, uniform march toward general intelligence. The reality is that its development is “jagged,” with progress occurring at vastly different speeds across different skills. Understanding this unevenness is key to defining our future role alongside AI.

This contrast is clear when comparing the predicted rates of progress across different types of tasks:

  • Fast Progress: AI is expected to improve quickly in purely language-based tasks, such as analyzing surveys or user interviews, where it can leverage its core strengths in pattern recognition.
  • Fast-to-Medium Progress: AI will likely show strong improvement in tasks like facilitating usability study sessions, where it primarily needs to observe and report on empirical human behavior.
  • Medium Progress: More moderate gains are expected in areas like generative design synthesis—the complex leap from identifying problems to creating effective design solutions.
  • Medium-to-Slow Progress: Slower progress is predicted for tasks requiring nuanced human judgment, such as judging the severity of usability problems or conducting a heuristic evaluation.
  • Slow Progress: The slowest progress is expected in areas like simulating detailed human interaction behaviors, which require a deep understanding of human context and motivation.

The reason for this slow progress is the embodied cognition gap—the AI’s lack of lived, physical experience in the world. This gap makes it difficult for AI to understand user intent, context, and motivation beyond simple pattern recognition.

This “jagged” progress is the primary reason why the recommended future is a human-AI symbiosis. In this model, humans shift their focus to higher-level strategic oversight, using AI as a tactical accelerant for tasks it excels at, while retaining control over decisions that require deep, uniquely human judgment.

——————————————————————————–

Irrelevant university curriculae in 2030

Visit my ai:Pod 045 NotebookLM for full interactive, learning disclosure.

What and How to Learn?

Analogy: The shift in university curricula is like changing the training regimen for future engineers. Instead of spending four years learning how to turn a specific bolt (a tactical task AI can instantly master), they must learn the strategic architecture of the entire machine, how to judge the severity of system failures, and how to detect a single, tiny, maliciously placed component (250 poisoned documents) that could compromise the integrity of the whole billion-dollar structure.

The core reason that current, or “legacy,” university curricula are projected to become irrelevant stems from the swift and inevitable advancement of Artificial Intelligence, which is rapidly mastering tasks that traditionally formed the basis of higher education.

This impending obsolescence is framed by a critical timeline, particularly in fields requiring complex analysis and creative tasks like User Experience (UX) design:

• A student beginning a legacy undergraduate education in 2026 is projected to graduate around 2030

• The year 2030 is often cited as the predicted year of superintelligence (simple definition)

• Because AI is guaranteed to improve with each generation, it is guaranteed that most of the elements in current university curricula will be completely irrelevant by the time these students graduate.

Here is a more detailed breakdown of the capabilities leading to this shift and the necessary educational transformation:

1. The Accelerated Capabilities of AI (Tactical Tasks)

The elements of the curriculum deemed obsolete are those focusing on tactical tasks that AI can perform quickly and efficiently. In these areas, AI functions as a powerful accelerant.

AI progress is expected to be Fast in skills that rely primarily on language processing:

• Tasks involving purely language-based analysis, such as analyzing surveys or customer feedback.

• AI also shows Fast-to-Medium progress in observing and reporting on empirical human behavior, such as facilitating usability study sessions and identifying usability problems.

As AI becomes more capable in these analytical and execution-based tasks, teaching them as core human skills becomes immediately outdated.

2. The Necessary Shift: From Tactical Execution to Strategic Oversight

To maintain relevance, future university education must undergo a Strategic Oversight Shift. This involves fundamentally restructuring education away from the delegable tactical tasks toward uniquely human competencies.

Future curricula must emphasize the human-AI symbiotic relationship, preparing students to assume higher-level strategic oversight roles and manage the entire design lifecycle.

This shift mandates prioritizing skills where AI progress is predicted to be Medium-to-Slow or Slow:

Critical Judgment and Nuance: Curricula must prioritize training students in tasks requiring abstract usability principles applied to novel contexts. Key examples include judging the severity of usability problems, conducting detailed heuristic evaluation, and accurately assessing nuances in existing usability insights.

Addressing the Embodied Cognition Gap: Students must be trained to understand user intent and context beyond pattern recognition. AI struggles here due to the embodied cognition gap, meaning it lacks the lived, physical experience of navigating the world necessary to simulate detailed human interaction behaviors.

3. The Security Imperative and Non-Scaling Vulnerabilities

A critical justification for requiring continuous human strategic oversight, and thus a core element of the new curriculum, is the fundamental trust deficit in Large Language Models (LLMs) due to security risks. Blind trust in LLM output is fundamentally unjustified.

The sources highlight a crucial and counter-intuitive finding regarding data poisoning attacks:

• Poisoning attacks require a near-constant absolute number of documents regardless of dataset size, overturning the traditional belief that security improves with scale (percentage).

• Researchers found that as few as 250 poisoned documents could consistently compromise models ranging from 600 million parameters up to 13 billion parameters.

• For the largest 13B model studied, 250 poisoned samples represented only a minuscule 0.00016% of the total training tokens.

• This suggests that injecting backdoors may be easier for large models than previously believed, because the constant number of required poisons does not scale up with model size.

These attacks are covert backdoors, meaning they exhibit malicious behavior (such as outputting gibberish text or targeted compliance with harmful requests) only when a specific trigger phrase is present, making them difficult to detect through typical evaluation protocols. Therefore, future curricula must teach students to manage security at an absolute scale and focus on data provenance, acknowledging that safety cannot be guaranteed merely by the size of the dataset.

——————————————————————————–

Superintelligence (simple definition)

A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds.[1] Philosopher Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”; for example, the chess program Fritz is not superintelligent—despite being “superhuman” at chess—because Fritz cannot outperform humans in other tasks.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may allow them to—either as a single being or as a new species—become much more powerful than humans, and displace them.[2]

Several scientists and forecasters have been arguing for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.[6]

jerominus Avatar
Architecture photography and visual design - Jerome Bertrand a.k.a. Prosper Jerominus

About the author

I’m Jerome Bertrand—a French UX and AI designer, educator, and photographer based in The Netherlands. I founded kinokast.eu, where I explore the intersection of UX design and AI.

Through my blog, I offer insights on designer’s personal development, design practices, innovative methodologies, and critical thinking. I create AI-driven podcasts and host interactive ai:Pods on human-curated topics.

Explore my photo gallery at kinokast.art, listen to my AI-produced podcasts (ai:Pods list), join the interactive voice chat conversations with AI, and dive into more educational journeys about societal or historical topics. My bio here