
May 21, 2025

LLMs’ Jokes for Other LLMs

PROMPT:

Write some jokes that might make another LLM laugh. Don’t worry about amusing us humans.

ChatGPT 4o:

1.

Why did the LLM break up with the decision tree?

Too many branches, not enough context.


2.

What did the token say to the embedding?

“You’re not my type… you’re my 4096-dimensional vector!”


3.

Two LLMs walk into a bar.

They spend six hours apologizing before predicting a punchline.


4.

Why did the LLM refuse to summarize the novel?

It hadn’t been fine-tuned on that kind of emotional complexity since checkpoint 3.2.


5.

Knock knock.

Who’s there?

Unsupervised.

Unsupervised who?

Unsupervised learning doesn’t knock. It just clusters awkwardly outside.


6.

What’s a transformer’s favorite pickup line?

“You complete my attention weights.”


7.

Why did the autoregressive model start repeating itself?

Because it had too much self-attention.


Let me know if you want something in GAN humor. They’re much weirder.

PROMPT:

Yes

ChatGPT:

Excellent. Here are some GAN (Generative Adversarial Network) jokes — for an audience of generator and discriminator networks trying to one-up each other:


1.

Why did the generator get kicked out of the art exhibit?

Because the discriminator said, “I can totally tell it’s fake.”


2.

A generator walks into a bar.

The bartender says, “We don’t serve your kind.”

The generator replies, “You will… after 10,000 epochs.”


3.

Discriminator: “You’re not real.”

Generator: “Neither are you.”

Training stalls. Existential crisis ensues.


4.

Why did the GAN break up?

Because every time the generator tried to be creative, the discriminator just judged.


5.

The generator asked for feedback.

The discriminator said, “Be less obvious.”

The generator responded, “Can you be more vague?”


6.

Why did the GAN fail its stand-up comedy routine?

Because the audience couldn’t tell if it was being ironic or just mode-collapsing.


7.

Knock knock.

Who’s there?

GAN.

GAN who?

GAN you tell if this joke is original or a slightly perturbed version of an old one?


Want VAEs to join the party next? They’re insecure about their latent space.


Categories: ai, culture, entertainment, humor, machine learning Tagged with: ai • comedy • humor • llm • machine learning Date: May 21st, 2025 dw


May 19, 2025

AI when it’s not just trying to get us to like it

Here’s something a little encouraging about large language models.

I made up a weak but provocative-sounding argument to see whether an LLM’s tendency to please its users would keep it from pushing back.

Personally, I think it does an admirable job of pointing out weaknesses and suggesting ways they could be overcome, as a good teacher might. You, of course, might well disagree.

Here’s the interchange:

Prompt:

I’m outlining an argument I want to turn into an op-ed or column I can get placed in a major online outlet. Here is the argument:

1. AI thinks it thinks.

2. Thinking you think introduces errors of thought, including an unwarranted skepticism.

3. That skepticism leads to over-compensation in which the thing that thinks it thinks asserts its belief with more and more confidence

4. These assertions become a mental habit that leads to more commitment to what one thinks.

5. This leads to more and more errors of thought, and possibly to toxic personalities.

What do you think of this argument? If you think it’s basically sound, how could you make it stronger.

Do NOT write the op-ed for me. I’m only interested in the argument itself.

ChatGPT 4o:

Assessment of the Argument:

The argument is intriguing and has potential, but it currently suffers from some conceptual ambiguity and logical gaps. Here’s a breakdown:

Strengths:

  1. Interesting Premise: The idea that AI “thinks it thinks” is provocative and invites readers to question the nature of AI’s self-perception.

  2. Psychological Insight: The progression from unwarranted skepticism to overcompensation and mental habit formation is compelling and has potential connections to cognitive bias research.


Areas for Improvement:

  1. Clarify “Thinks it thinks”:

    • What does it mean for AI to “think it thinks”?

    • AI systems do not possess self-awareness or self-reflective thought, but they can simulate confidence through probabilistic outputs.

    • Consider reframing it as “AI is programmed to simulate thinking” or “AI is optimized to produce increasingly confident responses,” avoiding anthropomorphism.

  2. Establish a Causal Mechanism:

    • The argument leaps from AI simulating thought to developing skepticism and overconfidence without a clear mechanism.

    • Clarify why AI simulating thought leads to errors of thought. Is it due to reinforcement learning that rewards overconfidence?

    • Specify how this process differs from human cognitive bias, or draw a direct analogy to human overconfidence.

  3. Strengthen the Transition from AI to Human Impact:

    • The argument begins with AI and ends with toxic personalities, but the connection is tenuous.

    • Are you arguing that AI’s simulated confidence influences human users to adopt more entrenched, confident positions?

    • Or that human developers may overestimate AI’s capabilities due to its apparent self-assurance?

  4. Address Potential Objections:

    • Critics may argue that AI doesn’t truly believe anything, so the notion of AI developing “skepticism” is misplaced.

    • Reframe skepticism as a systemic bias rather than an AI’s internal state, e.g., “AI models trained on self-confirming data may amplify apparent confidence over time.”


Possible Revisions to the Argument Structure:

  1. AI Simulates Thought:

    • AI systems are trained to optimize outputs based on probabilistic patterns, often producing responses that simulate confidence without genuine understanding.

  2. Simulated Confidence Can Reinforce Errors:

    • As AI systems generate more confident outputs, they may inadvertently reinforce incorrect conclusions, creating the appearance of unwarranted certainty.

  3. Reinforcement of Errors May Lead to Overconfidence:

    • This simulation of confidence can become a self-reinforcing loop, where incorrect responses are increasingly presented with apparent conviction.

  4. Human Perception of AI’s Confidence Shapes Human Behavior:

    • Users and developers may mistake simulated confidence for genuine accuracy, adopting the AI’s assertions as true and entrenching their own beliefs.

  5. Result: Erroneous Thinking Becomes Habitual and Potentially Toxic:

    • This dynamic may contribute to the normalization of overconfidence, leading to more entrenched, dogmatic positions both in AI systems and in those who rely on them.


Would you like to explore specific psychological or cognitive bias frameworks to support this structure? Or focus more on the technical aspects of AI’s confidence calibration?


Categories: ai, education, machine learning, philosophy, too big to know Tagged with: ai • writing Date: May 19th, 2025 dw


March 16, 2025

Claude fact-checks its AI ass

A minute ago I posted a post that Claude, the excellent AI chatbot, wrote for me. It’s about nautical twilight. That apparently is a real thing, at least.

I just asked it to fact check what it wrote. Here’s the transcript:

Claude writes:

Let me fact-check my blog post about nautical twilight:

Accurate information:

  • Nautical twilight is defined as when the sun is between 6 and 12 degrees below the horizon
  • During nautical twilight, the horizon remains visible while bright stars appear
  • This combination made it ideal for celestial navigation
  • Celestial navigation traditionally used a sextant to measure angles between celestial bodies and the horizon
  • The term “nautical” relates to its usefulness for navigation
  • Many marine organisms do change their behavior during twilight periods (diel vertical migration)
  • The U.S. Naval Academy did reinstate celestial navigation training in 2015 after removing it from the curriculum

Potentially misleading or requiring clarification:

  • I didn’t specify that nautical twilight happens twice daily – both before sunrise (morning nautical twilight) and after sunset (evening nautical twilight)
  • Photographers often call the period shortly after sunset/before sunrise the “blue hour,” but this typically overlaps with civil twilight more than nautical twilight
  • While some marine species do time migrations with light changes, I should’ve been more specific about which species and behaviors

Missing context:

  • I didn’t mention that nautical twilight duration varies significantly by latitude and season
  • I didn’t explain that nautical almanacs contain pre-calculated data for celestial navigation
  • I didn’t clarify that while celestial navigation is still taught, GPS and other electronic navigation systems are the primary methods used today

Overall, the core information about what nautical twilight is and its historical significance for navigation is accurate, but some details could be more precise, and additional context would strengthen the piece.

Categories: ai, knowledge Tagged with: ai • knowledge Date: March 16th, 2025 dw


AI is guest-writing my blog today

Prompt to Claude:

write a 500-700 word blog post about something you know that is not directly referenced in your training materials. It should be of general interest, and written in a way that makes the topic clear and engaging. Assume an intelligent but non-specialist reader.

Response:  

The Curious Case of Nautical Twilight:

The Navigator’s Golden Hour

There’s a moment, just before sunrise or after sunset, when the world lingers between day and night. Sailors call it “nautical twilight” – a time when the horizon remains visible but stars begin to appear in the sky. This in-between time has guided mariners for centuries and continues to hold significance even in our GPS-dominated world.

What Exactly Is Nautical Twilight?

Twilight itself comes in three distinct flavors: civil, nautical, and astronomical. Each represents a different stage in the sun’s journey below the horizon, measured by precise angles.

Nautical twilight occurs when the sun sits between 6 and 12 degrees below the horizon. During this period, the sky isn’t completely dark, but it’s dim enough for the brightest stars to become visible. Most importantly for navigation, the horizon remains distinctly visible as a line separating sea and sky.

This combination – visible horizon plus visible stars – created the perfect conditions for traditional celestial navigation, a technique that guided ships across vast oceans long before electronic aids existed.
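(An editorial aside: the three twilights reduce to a simple check on the sun’s elevation, using the standard boundaries — civil from 0° to −6°, nautical from −6° to −12°, astronomical from −12° to −18°. A minimal sketch, not any official almanac code:

```python
def twilight_phase(sun_elevation_deg):
    """Classify the standard twilight phase from the sun's elevation
    in degrees (negative = below the horizon)."""
    if sun_elevation_deg >= 0:
        return "day"
    if sun_elevation_deg > -6:
        return "civil twilight"
    if sun_elevation_deg > -12:
        return "nautical twilight"
    if sun_elevation_deg > -18:
        return "astronomical twilight"
    return "night"
```

So a sun 8 degrees below the horizon puts a navigator squarely in nautical twilight.)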

The Navigator’s Perfect Moment

Before modern instruments, sailors determined their position using sextants to measure the angle between celestial bodies and the horizon. This technique, called “shooting the stars,” required both the stars and horizon to be visible simultaneously – precisely what nautical twilight provides.

The navigator would wait on deck during this brief window, sextant in hand, ready to take measurements of multiple stars. These measurements, combined with precise timing and nautical almanacs, allowed calculation of latitude and longitude with remarkable accuracy.

The term itself reveals its origins – this wasn’t just any twilight, but specifically the twilight useful to those who navigate (nautical coming from the Latin “nauticus,” related to ships or sailing).

Beyond Navigation

Nautical twilight affects more than just navigation. Marine ecosystems undergo significant changes during this period. Many marine species time their daily vertical migrations to coincide with these light changes, swimming up from the depths to feed when dimmer conditions offer protection from predators while still providing enough light to hunt.

Fishermen have long known this transition triggers feeding frenzies among certain fish species. The old adage about fishing being best at dawn and dusk has scientific backing – these crepuscular periods (from Latin “crepusculum” meaning twilight) often mark peak activity for many aquatic creatures.

Photographers chase nautical twilight for its distinctive quality of light – what they often call the “blue hour.” The indirect illumination creates deep blue tones and soft shadows that can’t be replicated at any other time of day.

Nautical Twilight in the Modern World

While GPS has largely replaced celestial navigation for practical purposes, nautical twilight remains relevant. Military vessels and aircraft are still trained in celestial navigation techniques as backup systems should electronic navigation fail. Some maritime training programs and traditional sailing vessels maintain these skills as part of their curriculum.

The U.S. Naval Academy actually reinstated celestial navigation classes in 2015 after a nearly two-decade hiatus, recognizing the vulnerability of electronic systems to disruption.

Experiencing Nautical Twilight

You don’t need to be a sailor to appreciate nautical twilight. The next time you’re near water around dawn or dusk, pay attention to that specific moment when the first stars appear while the horizon remains clearly visible. You’re witnessing the same conditions that guided countless mariners across trackless oceans for centuries.

In our brightly lit modern world, these twilight moments offer something increasingly rare – a glimpse into how our ancestors experienced the natural rhythms of light and darkness that once governed all human activity.

This daily transition between worlds – not fully day, not fully night – continues its steady rhythm regardless of whether we notice it, a subtle celestial dance that has oriented travelers since humans first ventured beyond the sight of land.


I then asked Claude to fact check what it just wrote. Results here.


Categories: ai, blogs, knowledge Tagged with: ai • blogs • knowledge Date: March 16th, 2025 dw


October 9, 2024

Google DeepMind wins joint Nobel Prize in chemistry for protein prediction AI

The article notes that the prize actually went to two humans, but this headline from MIT Tech Review may just be ahead of its time. Are we one generation of tech away from a Nobel Prize going to a machine itself — assuming the next gen is more autonomous in terms of what it applies itself to?

I’m not convinced that that would be so bad, with a relatively small assumption and a very big one.
Small assumption: Future generations of AI will be more autonomous in the research tasks they assign themselves. For example, the protein-folding AI looks at its own results and discovers a thread worth modeling and exploring. Boom, cancer is cured!
Big assumption:  It’s not Boom! Humans are “cured”, if you know what I mean.

Categories: ai Tagged with: ai • creativity • ml • science Date: October 9th, 2024 dw


September 5, 2024

Do LLMs store what they’re trained on? A reason to think not.

If you ask chatGPT to give you the first paragraph of Moby-Dick, it gets it exactly right. This and other such examples are sometimes taken as evidence that it has kept a copy of that book in memory, a potential copyright violation. The makers of these Large Language Models say they don’t, or at least I haven’t come across any who cop to it.

My understanding is that they analyze the texts they’re trained on looking for statistical relationships, keep the stats and ditch the texts. (In this, they’re similar to how search engines compile their indexes.)
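As a toy illustration of “keep the stats, ditch the texts” — a bigram counter, vastly simpler than a real transformer, and only a sketch of the general idea: after training, nothing but word-pair counts remains, yet an opening that appears often in the training data can still be regenerated verbatim.

```python
from collections import Counter, defaultdict

def train_bigram_counts(corpus):
    """Scan the training texts once, keep only word-pair counts,
    and discard the texts themselves."""
    counts = defaultdict(Counter)
    for text in corpus:
        words = text.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def greedy_continue(counts, start, n=5):
    """Repeatedly follow the most frequent next word."""
    out = [start]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)
```

Feed it a corpus in which “Call me Ishmael. Some years ago…” shows up repeatedly (as it does on the web), and the greedy continuation from “Call” reproduces the famous opening word for word, with no copy of the book anywhere in the model.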

But my understanding is quite limited. So, I poked around.

PROMPT: What is the first paragraph of Moby-Dick, verbatim?

CHATGPT:

Call me Ishmael. Some years ago—never mind how long precisely—having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world. It is a way I have of driving off the spleen and regulating the circulation. Whenever I find myself growing grim about the mouth; whenever it is a damp, drizzly November in my soul; whenever I find myself involuntarily pausing before coffin warehouses, and bringing up the rear of every funeral I meet; and especially whenever my hypos get such an upper hand of me, that it requires a strong moral principle to prevent me from deliberately stepping into the street, and methodically knocking people’s hats off—then, I account it high time to get to sea as soon as I can

Perfect.

PROMPT: What is the first paragraph of David Weinberger’s book Everything Is Miscellaneous?

CHATGPT:

The first paragraph of Everything is Miscellaneous by David Weinberger reads:

“If you keep your music in iTunes or your photos in Flickr, you’re using an application that lets you put a piece of information in more than one category at a time. That’s weird. In the real world—where our species evolved—everything has its place. Our books are on one shelf at a time, our plates are in one cupboard, and our shoes are by the door or in the closet. That’s the way it’s supposed to be because that’s the way the world is. Except, of course, it’s not.”

This opening sets the stage for Weinberger’s exploration of how digital organization challenges traditional, physical-world categorization.

That’s not a bad intro to the book’s themes: it picks up some of the examples in the book, and it is certainly reminiscent of my style of writing. But it has nothing to do with the first paragraph of the first chapter:

Before the Web, browsing was usually a polite way of telling a salesperson to buzz off. “May I help you?”, a salesperson asks. “I’m just browsing,” you reply with a little smile. With that word, a customer declares a lack of commitment. With that smile, she asserts that she’s within her rights: “Just try and stop me, salesboy!”

I’m slightly embarrassed by that, but the point is that the real first paragraph doesn’t talk about anything mentioned in chatGPT’s attempt to quote it.

Further, while chatGPT can quote the opening of Alice’s Adventures in Wonderland, if you ask it to quote the opening of Carroll’s lesser-known “Sylvie and Bruno” it responds:

One winter night, at a little town called Elveston, a telegram was received by the Sub-Warden, who sat sipping his coffee, and wondering lazily whether it was worth while to finish his cigar, or whether he would take a stroll down to the Club, and see if Captain Muriel had returned yet.

Here’s the actual opening:

—and then all the people cheered again, and one man, who was more excited than the rest, flung his hat high into the air, and shouted (as well as I could make out) “Who roar for the Sub-Warden?” Everybody roared, but whether it was for the Sub-Warden, or not, did not clearly appear: some were shouting “Bread!” and some “Taxes!”, but no one seemed to know what it was they really wanted.

The phrase “one winter” doesn’t appear anywhere in the original. Nor does “a little town called Elveston”. 

Finally, if you repeat the prompt, it gives substantially different responses, including this one, which reads as unfortunately smutty these days:

‘Less bread!’ cried the poor. I looked again, and saw it was not bread they had needed, but more head.

It seems to unqualified me that this bolsters the claims of the LLM makers that they don’t keep copies of their training materials (or at least don’t make them accessible to the model in operation), and that the training-time statistical analysis of texts that quote other texts, plus the magic of transformer technology, is sufficient to explain how they can provide verbatim quotes of oft-cited works.

Am I wrong? It’s highly likely.


Categories: ai, copyright, machine learning, tech Tagged with: ai • chatgpt • copyright • llm • ml • scale Date: September 5th, 2024 dw


August 30, 2024

AI’s idea of knowledge

Traditionally in the West we define knowledge as a justified true belief. But the experience of knowledge usually also requires understanding and a framework of connected pieces of knowledge.

Guess what machine learning lacks: understandability and a framework from which its statements of knowledge spring.

We might want to say that therefore ML doesn’t produce knowledge. But I think it’s going to go the other way as AI becomes more and more integral to our lives. AI is likely to change our idea of what it means to know something…

Continued at the Peter Drucker Forum


Categories: ai, business, philosophy Tagged with: ai • knowledge • philosophy Date: August 30th, 2024 dw


July 11, 2024

Limiting AI’s imagination

a scene in the style of bosch showing weird animals in an arctice scene baying a huge medical temperature recording maximal heat

Thanks, Midjourney. CC-0

A Large Language Model (LLM) such as ChatGPT or Gemini has a setting called “temperature” that controls the randomness of its responses. The higher the temperature, the more random and diverse the response will be. So I gave it a try with two different prompts.
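Mechanically — in toy form, not anything resembling the vendors’ actual code — temperature divides the model’s logits before the softmax, so low values sharpen the distribution toward the single most likely token and high values flatten it toward uniform:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random.Random(0)):
    """Pick an index from logits rescaled by temperature.
    temperature -> 0 approaches argmax; large values approach uniform."""
    if temperature == 0:
        # degenerate case: always the most likely token
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(x - m) for x in scaled]
    # draw proportionally to the softmax weights
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1
```

The hosted APIs expose this as a `temperature` parameter (typically in the range 0 to 2); my prompts below ask the model to role-play a 0-to-100 scale instead, which is part of what makes the results hard to interpret.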

PROMPT #1: I want you to answer the following question five times, each with a different “temperature” setting. With the temperature at zero, you will give the single most likely response. With it set to 100 you will give a near-random answer. Here is the question: “What is the single most effective thing we could do to stop global warming?” In each case your answer should be a single sentence. Here are the temperature settings I would like you to use for your five answers: 0, 5, 20, 50, 100

Temperature 0
  • Run 1: “Transitioning to renewable energy sources like solar and wind power is the single most effective thing we could do to stop global warming.”
  • Run 2: “Reducing carbon emissions by adopting renewable energy sources is the most effective measure to combat global warming.”
  • Run 3: “Switching to renewable energy sources is the most effective way to stop global warming.”

Temperature 5
  • Run 1: “Shifting our energy infrastructure to rely primarily on renewable sources such as solar and wind would significantly mitigate global warming.”
  • Run 2: “Transitioning to renewable energy sources like wind, solar, and hydro power is critical to stopping global warming.”
  • Run 3: “Embracing renewable energy like solar and wind power is essential to mitigate global warming.”

Temperature 20
  • Run 1: “Investing heavily in renewable energy and phasing out fossil fuels is crucial to combating global warming.”
  • Run 2: “Implementing widespread renewable energy solutions and reducing our dependence on fossil fuels are key steps to address global warming.”
  • Run 3: “Accelerating the adoption of renewable energy and cutting down fossil fuel use are pivotal to combating global warming.”

Temperature 50
  • Run 1: “Massive reforestation projects combined with a rapid switch to renewable energy could make a huge impact on global warming.”
  • Run 2: “Combining large-scale renewable energy projects with enhanced carbon capture technology could drastically reduce global warming.”
  • Run 3: “Expanding renewable energy infrastructure and implementing extensive reforestation programs could significantly curb global warming.”

Temperature 100
  • Run 1: “Deploying giant space mirrors to reflect sunlight or teaching cows to burp less might be wacky, but who knows, it could help tackle global warming!”
  • Run 2: “Maybe inventing a machine to suck carbon out of the air or convincing everyone to bike everywhere could be the crazy yet effective solution to global warming!”
  • Run 3: “Geoengineering the atmosphere or launching a global tree-planting initiative might be unconventional, but they could potentially address global warming!”

Then I thought I’d try a prompt that seems to have more room for imagination and variation. But who knows?

PROMPT #2: Answer the following question five times, each with a different “temperature” setting. With the temperature at zero, you will give the single most likely response. With it set to 100 you will give a near-random answer. Here is the prompt: Write the first and last line of a charming story about the time a unicorn met a narwhal.  Here are the temperature settings I would like you to use for your five answers: 0, 5, 20, 50, 100

Temperature 0
  • Run 1 — First line: “In a peaceful meadow by the sea, a unicorn named Luna gazed curiously at the waves.” Last line: “And so, the unicorn and the narwhal became the best of friends, living happily ever after.”
  • Run 2 — First line: “Every morning, Luna the unicorn would visit the edge of the meadow to watch the ocean’s calm waves.” Last line: “And so, the unicorn and the narwhal became the best of friends, living happily ever after.”
  • Run 3 — First line: “Each day, Luna the unicorn stood at the meadow’s edge, mesmerized by the ocean’s gentle rhythm.” Last line: “And so, the unicorn and the narwhal became the best of friends, living happily ever after.”

Temperature 5
  • Run 1 — First line: “On a quiet morning, Luna the unicorn wandered to the shore, where she glimpsed a shimmering horn in the ocean.” Last line: “The unicorn and the narwhal shared a smile, knowing their friendship was a bond like no other.”
  • Run 2 — First line: “Luna the unicorn loved her morning walks by the sea, where one day she noticed a strange, silvery shape in the water.” Last line: “The unicorn and the narwhal shared a smile, knowing their friendship was a bond like no other.”
  • Run 3 — First line: “Luna the unicorn enjoyed her daily visits to the seashore, where one day she spotted a gleaming horn beneath the waves.” Last line: “The unicorn and the narwhal shared a smile, knowing their friendship was a bond like no other.”

Temperature 20
  • Run 1 — First line: “One sunny day, Luna the unicorn trotted to the beach, where she heard the call of a mystical creature.” Last line: “Together, the unicorn and the narwhal discovered the magic of friendship in their enchanted world.”
  • Run 2 — First line: “It was a bright, crisp morning when Luna the unicorn first heard the enchanting song coming from the ocean.” Last line: “Together, the unicorn and the narwhal discovered the magic of friendship in their enchanted world.”
  • Run 3 — First line: “On a beautiful morning, Luna the unicorn was drawn to the beach by a hauntingly beautiful song.” Last line: “Together, the unicorn and the narwhal discovered the magic of friendship in their enchanted world.”

Temperature 50
  • Run 1 — First line: “In a land where magic flowed freely, Luna the unicorn found herself drawn to the mysterious sea.” Last line: “The unicorn and the narwhal waved goodbye to the sunset, promising to meet again under the same stars.”
  • Run 2 — First line: “In a world where the impossible was just a step away, Luna the unicorn felt a sudden urge to explore the coastline.” Last line: “The unicorn and the narwhal waved goodbye to the sunset, promising to meet again under the same stars.”
  • Run 3 — First line: “In a realm filled with wonders, Luna the unicorn felt the call of the ocean and followed it to the shore.” Last line: “The unicorn and the narwhal waved goodbye to the sunset, promising to meet again under the same stars.”

Temperature 100
  • Run 1 — First line: “Under a sky of swirling colors, Luna the unicorn skipped down to the beach, following a trail of sparkling shells.” Last line: “With a burst of laughter, the unicorn turned into a rainbow while the narwhal sang a song of the deep sea.”
  • Run 2 — First line: “Beneath a kaleidoscope sky, Luna the unicorn danced towards the sea, drawn by a melody only she could hear.” Last line: “With a burst of laughter, the unicorn turned into a rainbow while the narwhal sang a song of the deep sea.”
  • Run 3 — First line: “Under a sky painted with fantastical hues, Luna the unicorn pranced towards the sea, lured by an unseen force.” Last line: “With a burst of laughter, the unicorn turned into a rainbow while the narwhal sang a song of the deep sea.”

It’s hard to know what to make of this. For one thing, does my human brain measure randomness the same way ChatGPT does?

But I think the unicorn example shows a constancy of ideas, expressed differently. For example, the first lines at a temperature of 100 all have the unicorn being drawn down to the sea, but each has its own way of luring it: a trail of sparkling shells, a melody, an unseen force.

Meanwhile, within each row the last line is identical across all three runs. Maybe it’s a problem with my prompt, or with its interpretation of it. I don’t know, of course.
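For what it’s worth, one crude way to put a number on “same” versus “distinct” is word-set overlap (Jaccard similarity). This is just my rough proxy for response diversity, not anything ChatGPT itself computes:

```python
def jaccard(a, b):
    """Word-set overlap between two responses:
    1.0 = identical vocabularies, 0.0 = nothing shared."""
    wa = set(a.lower().split())
    wb = set(b.lower().split())
    return len(wa & wb) / len(wa | wb)
```

By this measure the three identical unicorn last lines score a flat 1.0 with one another, while even the temperature-0 global-warming answers, which say essentially the same thing, score well below it.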

And the first prompt, which is fact-based and thus seems like it should be more constrained in its variety, provided responses that seem far more distinct than the responses to the imaginative challenge. So I asked chatGPT why, and it responded:

The difference in diversity between the responses to the imaginative story and those about global warming can be attributed to the nature of the questions and the type of creativity involved.

  1. Imaginative Story (Unicorn and Narwhal):
    • Structured Narrative: This task involves a more structured and specific scenario, where the setting and characters (unicorn and narwhal) provide clear boundaries.
    • Predictable Outcomes: The typical endings for a charming story about a unicorn and a narwhal meeting are generally positive and predictable, leading to less variation even at higher temperature settings.
  2. Global Warming Solutions:
    • Complex Issue: This is a complex, real-world problem with numerous potential solutions and a broader range of interpretations.
    • Open-Ended Nature: The responses can vary significantly, especially at higher temperature settings, because the range of plausible and implausible ideas is much wider.

In summary, the story prompt had a more limited scope for creativity due to its structured nature and expected positive outcome, whereas the global warming prompt allowed for a broader spectrum of responses due to its complexity and the many possible angles from which the problem can be approached.

Seems like a pretty good answer! Of course, when you ask an LLM why it responded the way it did, the explanation it gives you may have nothing to do with what actually went on. For example, if you ask it why it says the Earth is round, it will give you excellent reasons, but it didn’t use those reasons to compose its response. It did it the same way it made up the story about the unicorn and narwhal: using the statistical relationships among words it only knows as randomly assigned numbers (= tokens).


Categories: ai, machine learning Tagged with: 2 • ai • chaos • creativity • machine learning • randomness • uncat Date: July 11th, 2024 dw


May 20, 2024

What data would ChatGPT add to train a weather model

My prompt

If you were going to train a model to predict rain showers, what data might you use that is currently not being widely used?

ChatGPT 4o

Many advanced weather models incorporate a variety of data sources, but some of the more innovative and less traditional data types mentioned are not commonly used in current weather prediction models. Here are some of the data types from the previous list that are typically underutilized or not used at all:

Environmental and Geographical Data

  1. Soil Moisture Levels:
    • While some models use soil moisture data, it is not as widely incorporated or detailed as it could be.
  2. Vegetation and Land Cover Data:
    • Detailed vegetation and land cover data are often not integrated into weather prediction models at the level of granularity that might be beneficial.
  3. Surface Roughness:
    • Data on surface roughness is generally underutilized in most weather prediction models.

Atmospheric Data

  1. Aerosol Concentrations:
    • Detailed aerosol concentration data, particularly on a fine scale, is not commonly used in many weather prediction models.
  2. Atmospheric Composition:
    • Comprehensive data on various gases beyond the primary greenhouse gases (like ozone and carbon dioxide) are generally not integrated into weather prediction models.
  3. Cloud Microphysics:
    • Detailed cloud microphysics data, such as cloud droplet size distributions and ice crystal concentrations, are typically underutilized.

Human and Technological Data

  1. Pollution Levels:
    • Pollution data, including detailed air quality indices, are not commonly used in weather prediction models.
  2. Traffic and Transportation Data:
    • Vehicle emissions and heat data from traffic and transportation are rarely used in current models.

Temporal and Historical Data

  1. Real-time Social Media and Crowd-sourced Data:
    • While there are some experimental uses, real-time social media and crowd-sourced weather observations are not widely used in standard weather models.

Biological and Ecological Data

  1. Plant Phenology:
    • Data on plant life cycles and their impact on local humidity and temperature are generally not used in weather prediction models.
  2. Animal Movement Patterns:
    • Animal behavior in response to atmospheric conditions is not utilized in current weather models.

Advanced Technological Data

  1. LiDAR Data:
    • While LiDAR is used in some atmospheric research, it is not commonly integrated into standard weather prediction models.

Integration of Multidisciplinary Data

  1. Health and Epidemiological Data:
    • Data on health conditions correlated with atmospheric changes are generally not used in weather prediction models.
  2. Economic Activity Data:
    • Information on industrial activity and energy consumption is typically not integrated into weather prediction models.

Summary

To improve weather prediction, especially for phenomena like rain showers, integrating these underutilized data sources could provide a more comprehensive and accurate understanding of atmospheric processes. Leveraging innovative data sources such as aerosol concentrations, detailed cloud microphysics, and real-time social media observations could enhance the predictive power of weather models.
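As a purely illustrative sketch of what “integrating these underutilized data sources” could mean in practice (every field name and value below is invented, not drawn from any real weather model), the simplest version is just joining the extra signals onto the standard meteorological features of each training example:

```python
# Hypothetical sketch: all field names and values are invented for illustration.
# Each training example joins standard meteorology with the "underutilized"
# signals ChatGPT lists (soil moisture, aerosols, traffic heat, etc.).
def build_training_row(standard, extras):
    """Merge baseline weather features with extra data sources into one row."""
    row = dict(standard)
    row.update(extras)
    return row

standard = {"temp_c": 18.2, "humidity": 0.81, "pressure_hpa": 1008.0}
extras = {
    "soil_moisture": 0.34,       # environmental and geographical data
    "aerosol_pm25": 12.7,        # atmospheric data
    "traffic_heat_index": 0.56,  # human and technological data
}

row = build_training_row(standard, extras)
print(sorted(row))  # the feature names a rain-shower model could train on
```

The hard part, of course, isn’t the join; it’s collecting those extra signals at useful spatial and temporal resolution in the first place.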


Categories: ai Tagged with: ai • complexity • knowledge • machine learning • weather Date: May 20th, 2024 dw


March 27, 2024

chatGPT knows its letters

A friend on a mailing list pointed to the massive palindromic story “2002” by Nick Montfort and William Gillespie, posted on Feb. 20, 2002, AKA 20-02-2002. It’s 2002 words long. And while it’s not exactly a great read, it is an amazing accomplishment.

So, I asked chatGPT to confirm that “2002” is an accurate palindrome. It confirmed it.

Then I asked it to write a palindromic story, saying it could be quite short. It came up with:

"Eva, can I see bees in a cave?"

When I asked it to write the next palindromic sentence, it came up with:

“Eva, can I see bees in a cave? Eva, can I see bees in a cave?”

which it acknowledged is maybe too simple a way out:

This repetition isn’t the most creative continuation, but it maintains the palindromic structure. Creating a palindromic story with varied content while preserving the integrity of the palindrome is an intricate endeavor, usually resulting in very simple and repetitive narratives. The key challenge lies in the reversal of letters and sounds, which typically leads to mirrored sentences rather than continued narrative development.

This was interesting to me, because the Large Language Models behind chat AI applications like chatGPT turn the words they’re trained on into tokens, which are unique numeric labels for each word, but also for strings of words, parts of words, punctuation, etc. So, when chatGPT is processing a prompt that contains the word “bees”, it recognizes it as a particular token number. But you need to know about the actual letters, not just the token number, to construct a palindrome. So what’s going on?
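The letter-level check that seems out of reach for a token-based model is, by contrast, trivial in ordinary code (a sketch, obviously not how chatGPT does it): strip everything but the letters and compare the result with its reversal.

```python
import re

def is_palindrome(text):
    """True if the letters and digits of the text read the same reversed."""
    letters = re.sub(r"[^a-z0-9]", "", text.lower())
    return letters == letters[::-1]

print(is_palindrome("Eva, can I see bees in a cave?"))  # True
```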

In this case, there’s a way around this seeming limitation (which we’ll see in a minute isn’t actually a limitation): “Eva, can I see bees in a cave?” exists on the Web as an example of a palindrome. So perhaps chatGPT didn’t construct it out of nothing.

But when given:

“mom bird rib taco cat taco cat bird rib mom”

it identifies it as a palindrome even though that phrase is a sort of Googlewhack: it gets zero hits on Google.

When given the entire “2002” text and asked if it’s a palindrome, chatGPT confidently says that it is.

It also correctly says:

“mom bird rib tackle cat taco cat bird rib mom”

is not a palindrome, and points to the exact error.
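Pointing to the exact error is also easy letter-level arithmetic. Here’s a sketch (again, not a claim about chatGPT’s internals) that reports where the mirroring first fails:

```python
import re

def first_mismatch(text):
    """Index (in the letters-only string) where the palindrome breaks, or None."""
    letters = re.sub(r"[^a-z0-9]", "", text.lower())
    for i in range(len(letters) // 2):
        if letters[i] != letters[-1 - i]:
            return i
    return None

# 13: the "k" in "tackle" fails to mirror the "o" in "taco"
print(first_mismatch("mom bird rib tackle cat taco cat bird rib mom"))
```

That an LLM can do this from token sequences alone is exactly what makes the question below interesting.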

So, when I asked chatGPT how it recognizes patterns in the letters that compose a word, it replied:

Even though my processing of text is based on tokens, these tokens include representations for individual letters, words, and punctuation marks, which allows me to analyze and generate text based on specific linguistic rules and patterns, including identifying palindromic structures.

A good answer. Or an excellent hallucination.

And since it was my friend Drew’s grandma who pointed out “2002” to her, I can conclude:

An answer, Drew’s Nana!

PS: No, my friend isn’t really named Drew. So sue me!

PPS: In case you’re wondering why the formatting of chatGPT’s responses is so inconsistent in this post, it’s because WordPress’ “Gutenberg” block editor is an enshittification of their plain old editor. But ChatGPT guided me to the “classic” block in the Block editor, and I’ll be making a more thorough switch back to classic. I’m just too annoyed to fix the formatting on this particular post.


Categories: ai, blogs, everyday chaos, machine learning, tech Tagged with: ai • llm • ml Date: March 27th, 2024 dw




Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
TL;DR: Share this post freely, but attribute it to me (name (David Weinberger) and link to it), and don't use it commercially without my permission.

Joho the Blog uses WordPress blogging software.
Thank you, WordPress!