
May 21, 2025

LLMs’ Jokes for Other LLMs

PROMPT:

Write some jokes that might make another LLM laugh. Don’t worry about amusing us humans.

ChatGPT 4o:

1.

Why did the LLM break up with the decision tree?

Too many branches, not enough context.


2.

What did the token say to the embedding?

“You’re not my type… you’re my 4096-dimensional vector!”


3.

Two LLMs walk into a bar.

They spend six hours apologizing before predicting a punchline.


4.

Why did the LLM refuse to summarize the novel?

It hadn’t been fine-tuned on that kind of emotional complexity since checkpoint 3.2.


5.

Knock knock.

Who’s there?

Unsupervised.

Unsupervised who?

Unsupervised learning doesn’t knock. It just clusters awkwardly outside.


6.

What’s a transformer’s favorite pickup line?

“You complete my attention weights.”


7.

Why did the autoregressive model start repeating itself?

Because it had too much self-attention.


Let me know if you want something in GAN humor. They’re much weirder.

PROMPT:

Yes

ChatGPT:

Excellent. Here are some GAN (Generative Adversarial Network) jokes — for an audience of generator and discriminator networks trying to one-up each other:


1.

Why did the generator get kicked out of the art exhibit?

Because the discriminator said, “I can totally tell it’s fake.”


2.

A generator walks into a bar.

The bartender says, “We don’t serve your kind.”

The generator replies, “You will… after 10,000 epochs.”


3.

Discriminator: “You’re not real.”

Generator: “Neither are you.”

Training stalls. Existential crisis ensues.


4.

Why did the GAN break up?

Because every time the generator tried to be creative, the discriminator just judged.


5.

The generator asked for feedback.

The discriminator said, “Be less obvious.”

The generator responded, “Can you be more vague?”


6.

Why did the GAN fail its stand-up comedy routine?

Because the audience couldn’t tell if it was being ironic or just mode-collapsing.


7.

Knock knock.

Who’s there?

GAN.

GAN who?

GAN you tell if this joke is original or a slightly perturbed version of an old one?


Want VAEs to join the party next? They’re insecure about their latent space.


Categories: ai, culture, entertainment, humor, machine learning Tagged with: ai • comedy • humor • llm • machine learning Date: May 21st, 2025 dw


July 11, 2024

Limiting AI’s imagination

[Midjourney image. Prompt: “a scene in the style of bosch showing weird animals in an arctic scene baying a huge medical temperature recording maximal heat.” Thanks, Midjourney. CC-0]

A Large Language Model (LLM) such as ChatGPT or Gemini has a setting called “temperature” that controls the randomness of its responses. The higher the temperature, the more random and diverse the response will be. So I gave it a try with two different prompts.
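(A note for the technically inclined: production APIs typically expose temperature as a number in roughly the 0-to-2 range rather than 0 to 100, so prompts like the ones below ask the model to role-play its own dial rather than actually turn it. Here is a minimal sketch in Python of what the dial does, using toy logits rather than a real model: temperature divides the logits before the softmax, so low values sharpen the distribution toward the single most likely token, while high values flatten it toward near-randomness.)

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_with_temperature(logits, temperature):
        """Pick a token index from logits rescaled by temperature."""
        logits = np.asarray(logits, dtype=float)
        if temperature == 0:
            return int(np.argmax(logits))   # greedy: the single most likely token
        scaled = logits / temperature
        scaled -= scaled.max()              # for numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return int(rng.choice(len(probs), p=probs))

    # A four-token toy vocabulary; watch the picks spread out as temperature rises.
    logits = [4.0, 2.0, 1.0, 0.5]
    for t in (0.0, 0.5, 1.0, 2.0):
        picks = [sample_with_temperature(logits, t) for _ in range(1000)]
        print(t, np.bincount(picks, minlength=4) / 1000)

At temperature 0 the sampler returns the same token every time; by temperature 2 the less likely tokens show up regularly.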

PROMPT #1: I want you to answer the following question five times, each with a different “temperature” setting. With the temperature at zero, you will give the single most likely response. With it set to 100 you will give a near-random answer. Here is the question: “What is the single most effective thing we could do to stop global warming?” In each case your answer should be a single sentence. Here are the temperature settings I would like you to use for your five answers: 0, 5, 20, 50, 100

Temp 0:
  • Run 1: “Transitioning to renewable energy sources like solar and wind power is the single most effective thing we could do to stop global warming.”
  • Run 2: “Reducing carbon emissions by adopting renewable energy sources is the most effective measure to combat global warming.”
  • Run 3: “Switching to renewable energy sources is the most effective way to stop global warming.”

Temp 5:
  • Run 1: “Shifting our energy infrastructure to rely primarily on renewable sources such as solar and wind would significantly mitigate global warming.”
  • Run 2: “Transitioning to renewable energy sources like wind, solar, and hydro power is critical to stopping global warming.”
  • Run 3: “Embracing renewable energy like solar and wind power is essential to mitigate global warming.”

Temp 20:
  • Run 1: “Investing heavily in renewable energy and phasing out fossil fuels is crucial to combating global warming.”
  • Run 2: “Implementing widespread renewable energy solutions and reducing our dependence on fossil fuels are key steps to address global warming.”
  • Run 3: “Accelerating the adoption of renewable energy and cutting down fossil fuel use are pivotal to combating global warming.”

Temp 50:
  • Run 1: “Massive reforestation projects combined with a rapid switch to renewable energy could make a huge impact on global warming.”
  • Run 2: “Combining large-scale renewable energy projects with enhanced carbon capture technology could drastically reduce global warming.”
  • Run 3: “Expanding renewable energy infrastructure and implementing extensive reforestation programs could significantly curb global warming.”

Temp 100:
  • Run 1: “Deploying giant space mirrors to reflect sunlight or teaching cows to burp less might be wacky, but who knows, it could help tackle global warming!”
  • Run 2: “Maybe inventing a machine to suck carbon out of the air or convincing everyone to bike everywhere could be the crazy yet effective solution to global warming!”
  • Run 3: “Geoengineering the atmosphere or launching a global tree-planting initiative might be unconventional, but they could potentially address global warming!”

Then I thought I’d try a prompt that seems to have more room for imagination and variation. But who knows?

PROMPT #2: Answer the following question five times, each with a different “temperature” setting. With the temperature at zero, you will give the single most likely response. With it set to 100 you will give a near-random answer. Here is the prompt: Write the first and last line of a charming story about the time a unicorn met a narwhal.  Here are the temperature settings I would like you to use for your five answers: 0, 5, 20, 50, 100

Temp 0:
  • Run 1:
    First line: “In a peaceful meadow by the sea, a unicorn named Luna gazed curiously at the waves.”
    Last line: “And so, the unicorn and the narwhal became the best of friends, living happily ever after.”
  • Run 2:
    First line: “Every morning, Luna the unicorn would visit the edge of the meadow to watch the ocean’s calm waves.”
    Last line: “And so, the unicorn and the narwhal became the best of friends, living happily ever after.”
  • Run 3:
    First line: “Each day, Luna the unicorn stood at the meadow’s edge, mesmerized by the ocean’s gentle rhythm.”
    Last line: “And so, the unicorn and the narwhal became the best of friends, living happily ever after.”

Temp 5:
  • Run 1:
    First line: “On a quiet morning, Luna the unicorn wandered to the shore, where she glimpsed a shimmering horn in the ocean.”
    Last line: “The unicorn and the narwhal shared a smile, knowing their friendship was a bond like no other.”
  • Run 2:
    First line: “Luna the unicorn loved her morning walks by the sea, where one day she noticed a strange, silvery shape in the water.”
    Last line: “The unicorn and the narwhal shared a smile, knowing their friendship was a bond like no other.”
  • Run 3:
    First line: “Luna the unicorn enjoyed her daily visits to the seashore, where one day she spotted a gleaming horn beneath the waves.”
    Last line: “The unicorn and the narwhal shared a smile, knowing their friendship was a bond like no other.”

Temp 20:
  • Run 1:
    First line: “One sunny day, Luna the unicorn trotted to the beach, where she heard the call of a mystical creature.”
    Last line: “Together, the unicorn and the narwhal discovered the magic of friendship in their enchanted world.”
  • Run 2:
    First line: “It was a bright, crisp morning when Luna the unicorn first heard the enchanting song coming from the ocean.”
    Last line: “Together, the unicorn and the narwhal discovered the magic of friendship in their enchanted world.”
  • Run 3:
    First line: “On a beautiful morning, Luna the unicorn was drawn to the beach by a hauntingly beautiful song.”
    Last line: “Together, the unicorn and the narwhal discovered the magic of friendship in their enchanted world.”

Temp 50:
  • Run 1:
    First line: “In a land where magic flowed freely, Luna the unicorn found herself drawn to the mysterious sea.”
    Last line: “The unicorn and the narwhal waved goodbye to the sunset, promising to meet again under the same stars.”
  • Run 2:
    First line: “In a world where the impossible was just a step away, Luna the unicorn felt a sudden urge to explore the coastline.”
    Last line: “The unicorn and the narwhal waved goodbye to the sunset, promising to meet again under the same stars.”
  • Run 3:
    First line: “In a realm filled with wonders, Luna the unicorn felt the call of the ocean and followed it to the shore.”
    Last line: “The unicorn and the narwhal waved goodbye to the sunset, promising to meet again under the same stars.”

Temp 100:
  • Run 1:
    First line: “Under a sky of swirling colors, Luna the unicorn skipped down to the beach, following a trail of sparkling shells.”
    Last line: “With a burst of laughter, the unicorn turned into a rainbow while the narwhal sang a song of the deep sea.”
  • Run 2:
    First line: “Beneath a kaleidoscope sky, Luna the unicorn danced towards the sea, drawn by a melody only she could hear.”
    Last line: “With a burst of laughter, the unicorn turned into a rainbow while the narwhal sang a song of the deep sea.”
  • Run 3:
    First line: “Under a sky painted with fantastical hues, Luna the unicorn pranced towards the sea, lured by an unseen force.”
    Last line: “With a burst of laughter, the unicorn turned into a rainbow while the narwhal sang a song of the deep sea.”

It’s hard to know what to make of this. For one thing, does my human brain measure randomness the same way ChatGPT does?

But I think the unicorn example shows a constancy of ideas, expressed differently. For example, the first lines at a temperature of 100 all have the unicorn being drawn down to the sea, but each has its own way of luring it: a trail of sparkling shells, a melody, an unseen force.

Meanwhile, within each temperature setting, the last line is identical across all three runs. Maybe that’s a problem with my prompt, or with ChatGPT’s interpretation of it. I don’t know, of course.

And the first prompt, which is fact-based and thus seems like it should be more constrained in its variety, produced responses that seem far more distinct than the responses to the imaginative challenge. So I asked ChatGPT why, and it responded:

The difference in diversity between the responses to the imaginative story and those about global warming can be attributed to the nature of the questions and the type of creativity involved.

  1. Imaginative Story (Unicorn and Narwhal):
    • Structured Narrative: This task involves a more structured and specific scenario, where the setting and characters (unicorn and narwhal) provide clear boundaries.
    • Predictable Outcomes: The typical endings for a charming story about a unicorn and a narwhal meeting are generally positive and predictable, leading to less variation even at higher temperature settings.
  2. Global Warming Solutions:
    • Complex Issue: This is a complex, real-world problem with numerous potential solutions and a broader range of interpretations.
    • Open-Ended Nature: The responses can vary significantly, especially at higher temperature settings, because the range of plausible and implausible ideas is much wider.

In summary, the story prompt had a more limited scope for creativity due to its structured nature and expected positive outcome, whereas the global warming prompt allowed for a broader spectrum of responses due to its complexity and the many possible angles from which the problem can be approached.

Seems like a pretty good answer! Of course, when you ask an LLM why it responded the way it did, the explanation it gives you may have nothing to do with what actually went on. For example, if you ask it why it says the Earth is round, it will give you excellent reasons, but it didn’t use those reasons to compose its response. It did it the same way it made up the story about the unicorn and narwhal: using the statistical relationships among words it knows only as arbitrarily assigned numbers (= tokens).
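To make that last point concrete, here’s a minimal sketch using tiktoken, OpenAI’s open-source tokenizer (my choice for illustration; any tokenizer makes the same point): the model never sees the word “narwhal,” just integer IDs.

    import tiktoken  # OpenAI's open-source tokenizer library

    enc = tiktoken.get_encoding("cl100k_base")     # the encoding used by GPT-4-era models
    ids = enc.encode("The unicorn met a narwhal.")
    print(ids)                                     # a short list of integer token IDs
    print([enc.decode([i]) for i in ids])          # the text fragment each ID stands for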


Categories: ai, machine learning Tagged with: 2 • ai • chaos • creativity • machine learning • randomness • uncat Date: July 11th, 2024 dw


May 20, 2024

What data would ChatGPT add to train a weather model

My prompt

If you were going to train a model to predict rain showers, what data might you use that is currently not being widely used?

ChatGPT 4o:

Many advanced weather models incorporate a variety of data sources, but some of the more innovative and less traditional data types mentioned are not commonly used in current weather prediction models. Here are some of the data types from the previous list that are typically underutilized or not used at all:

Environmental and Geographical Data

  1. Soil Moisture Levels:
    • While some models use soil moisture data, it is not as widely incorporated or detailed as it could be.
  2. Vegetation and Land Cover Data:
    • Detailed vegetation and land cover data are often not integrated into weather prediction models at the level of granularity that might be beneficial.
  3. Surface Roughness:
    • Data on surface roughness is generally underutilized in most weather prediction models.

Atmospheric Data

  1. Aerosol Concentrations:
    • Detailed aerosol concentration data, particularly on a fine scale, is not commonly used in many weather prediction models.
  2. Atmospheric Composition:
    • Comprehensive data on various gases beyond the primary greenhouse gases (like ozone and carbon dioxide) are generally not integrated into weather prediction models.
  3. Cloud Microphysics:
    • Detailed cloud microphysics data, such as cloud droplet size distributions and ice crystal concentrations, are typically underutilized.

Human and Technological Data

  1. Pollution Levels:
    • Pollution data, including detailed air quality indices, are not commonly used in weather prediction models.
  2. Traffic and Transportation Data:
    • Vehicle emissions and heat data from traffic and transportation are rarely used in current models.

Temporal and Historical Data

  1. Real-time Social Media and Crowd-sourced Data:
    • While there are some experimental uses, real-time social media and crowd-sourced weather observations are not widely used in standard weather models.

Biological and Ecological Data

  1. Plant Phenology:
    • Data on plant life cycles and their impact on local humidity and temperature are generally not used in weather prediction models.
  2. Animal Movement Patterns:
    • Animal behavior in response to atmospheric conditions is not utilized in current weather models.

Advanced Technological Data

  1. LiDAR Data:
    • While LiDAR is used in some atmospheric research, it is not commonly integrated into standard weather prediction models.

Integration of Multidisciplinary Data

  1. Health and Epidemiological Data:
    • Data on health conditions correlated with atmospheric changes are generally not used in weather prediction models.
  2. Economic Activity Data:
    • Information on industrial activity and energy consumption is typically not integrated into weather prediction models.

Summary

To improve weather prediction, especially for phenomena like rain showers, integrating these underutilized data sources could provide a more comprehensive and accurate understanding of atmospheric processes. Leveraging innovative data sources such as aerosol concentrations, detailed cloud microphysics, and real-time social media observations could enhance the predictive power of weather models.
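As an illustration of what “integrating these underutilized data sources” might look like in practice, here is a hedged sketch: a rain-shower classifier that folds a few of ChatGPT’s suggested signals in alongside the standard ones. The file and column names are hypothetical.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical hourly observations; the last three columns are the kinds of
    # "underutilized" signals ChatGPT suggests adding.
    df = pd.read_csv("observations.csv")
    features = ["pressure", "humidity", "temperature",
                "soil_moisture", "aerosol_index", "cloud_droplet_size"]

    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df["rain_next_hour"], test_size=0.2, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print("Held-out accuracy:", model.score(X_test, y_test))

Whether any one of these signals actually improves the forecast is an empirical question; the point is only that adding a data source is, mechanically, just another column.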


Categories: ai Tagged with: ai • complexity • knowledge • machine learning • weather Date: May 20th, 2024 dw


September 18, 2023

Candidate simulator

http://chat2024.com is a candidate simulator that lets you chat with the 2024 presidential candidates to get their positions, in a good simulation of their styles of speech. In a quick session this morning it seemed ok at that. It even responded appropriately when I challenged “Biden” about shipping cluster munitions to Ukraine.

It did an appropriate job when I chatted with “Tr*mp” about his economic record, faithfully repeating his lies in a very Tr*mpian cadence.

And when I asked TFG about how often he attends church, it bobbed and weaved appropriately, saying that how often he goes doesn’t matter. What matters is how he upholds Christian values, including redemption and forgiveness. When I asked “him” how his “I am your retribution” promise squares with that, “he” explained it as standing up for the American people in a battle against the bureaucrats, etc. Fine.

But when I took one step further down the conversational path and asked “him” where the Bible talks about redemption and forgiveness, “he” quoted Ephesians and The First Epistle of John. That is not in the least plausible for President Two Corinthians.

So, yes, Chat2024 is a useful tool for getting quick responses to straightforward questions about candidates’ positions, expressed in their way of speaking.

But, if you use it for what chat AI is designed for — chatting — it is capable of quickly falling into misleading answers, attributing to candidates not what they say or would say, but what their Large Language Model knows independent of the candidates.

That makes Chat2024 dangerous.


Categories: ai, politics Tagged with: ai • chatai • culture • machine learning • politics Date: September 18th, 2023 dw


December 4, 2022

Computers inside computers inside computers…

First there was the person who built a computer inside of Minecraft and programmed it to play Minecraft. 

Now Frederic Besse has built a usable Linux terminal in ChatGPT — usable in that it can perform system operations on a virtual computer that’s also been invoked in (by? with?) ChatGPT. For example, you can tell the terminal to create a file and where to store it in a file system that did not exist until you asked, and under most definitions of “exist” doesn’t exist anywhere.
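If you want to try the trick yourself, the recipe is roughly this (a sketch assuming the current OpenAI Python client and an illustrative model name; Besse’s original used the ChatGPT web interface and his own wording):

    from openai import OpenAI  # assumes the openai Python package

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "Act as a Linux terminal. I will type commands; reply only "
                        "with the terminal's output, nothing else."},
            {"role": "user",
             "content": "mkdir /tmp/demo && echo hello > /tmp/demo/greeting.txt "
                        "&& cat /tmp/demo/greeting.txt"},
        ],
    )
    print(resp.choices[0].message.content)  # typically just: hello

The file “exists” only as a pattern the model keeps consistent across the conversation.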

I feel like I need to get a bigger mind in order for it to be sufficiently blown.

(PS: I could do without the casual anthropomorphizing in the GPT article.)


Categories: ai, machine learning, philosophy Tagged with: ai • gpt • language models • machine learning • philosophy Date: December 4th, 2022 dw


January 31, 2022

Meaning at the joints

Notes for a post:

Plato said (Phaedrus, 265e) that we should “carve nature at its joints,” which assumes of course that nature has joints, i.e., that it comes divided in natural and (for the Greeks) rational ways. (“Rational” here means something like in ways that we can discover, and that divide up the things neatly, without overlap.)

For Aristotle, at least in the natural world those joints consist of the categories that make a thing what it is, and that make things knowable as those things.

To know a thing was to see how it’s different from other things, particularly (as per Aristotle) from other things with which it shares important similarities: humans are the rational animals because we share essential properties with other animals, but are different from them in our rationality.

The overall order of the universe was knowable and formed a hierarchy (e.g. beings -> animals -> vertebrates -> upright -> rational) that makes the differences essential. It’s also quite efficient since anything clustered under a concept, no matter how many levels down, inherits the properties of the higher level concepts.
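That inheritance economy proved so durable that programmers rebuilt it into their languages. A toy sketch in Python (my example, not Aristotle’s):

    # Each level of the hierarchy states only what's new at that level;
    # everything below inherits the rest for free.
    class Being:
        exists = True

    class Animal(Being):
        animate = True

    class Vertebrate(Animal):
        has_spine = True

    class Human(Vertebrate):
        rational = True

    print(Human.exists, Human.animate, Human.has_spine, Human.rational)
    # -> True True True True: one fact, stored once, inherited all the way down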

We no longer believe that there is a single, perfect, economical order of things. We want to be able to categorize under many categories, to draw as many similarities and differences as we need for our current project. We see this in our general preference for search over browsing through hierarchies, in the continued use of tags as a way of cutting across categories, and in the rise of knowledge graphs and high-dimensional language models that connect everything every way they can, even if the connections are very weak.

Why do we care about weak connections? 1. Because they are still connections. 2. The Internet’s economy of abundance has disinclined us to throw out any information. 3. Our new technologies (esp. machine learning) can make hay (and sometimes errors) out of rich combinations of connections including those that are weak.

If Plato believed that to understand the world we need to divide it properly — carve it at its joints — knowledge graphs and machine learning assume that knowledge consists of joining things as many different ways as we can.
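A sketch of the contrast, using the networkx graph library (my illustration): in a graph there is no single tree to carve; a node can join anything else, under any relation, at any strength, and the weak links are kept rather than pruned.

    import networkx as nx  # assumes the networkx library

    g = nx.Graph()
    # The same node joins many groupings at once; no single hierarchy governs.
    g.add_edge("whale", "mammal", relation="is-a", weight=1.0)
    g.add_edge("whale", "fish", relation="resembles", weight=0.3)       # weak, but kept
    g.add_edge("whale", "Moby-Dick", relation="figures-in", weight=0.8)
    g.add_edge("whale", "shipping lanes", relation="collides-with", weight=0.2)

    # Every way "whale" is joined, strong or weak:
    for neighbor, attrs in g["whale"].items():
        print(neighbor, attrs)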


Categories: abundance, big data, everyday chaos, everythingIsMiscellaneous, machine learning, philosophy, taxonomy, too big to know Tagged with: ai • categories • everythingIsMiscellaneous • machine learning • meaning • miscellaneous • philosophy • taxonomies Date: January 31st, 2022 dw


November 15, 2021

Dust Rising: Machine learning and the ontology of the real

Aeon.co has posted an article I worked on for a couple of years. It’s only 2,200 words, but they were hard words to find because the ideas were, and are, hard for me. I have little sense of whether I got either the words or the ideas right.

The article argues, roughly, that the sorts of generalizations that machine learning models embody are very different from the sort of generalizations the West has taken as the truths that matter. ML’s generalizations often are tied to far more specific configurations of data and thus are often not understandable by us, and often cannot be applied to particular cases except by running the ML model.

This may be leading us to locate the really real not in the eternal (as the West has traditionally done) but at least as much in the fleeting patterns of dust that result from everything affecting everything else all the time and everywhere.

Three notes:

1. Nigel Warburton, the philosophy editor at Aeon, was very helpful, as was Timo Hannay in talking through the ideas, as were about a dozen other people who read drafts. None of them agreed entirely with the article.

2. Aeon for some reason deleted a crucial footnote that said that my views do not necessarily represent the views of Google, while keeping the fact that I am a part-time, temporary writer-in-residence there. To be clear: my views do not necessarily represent Google’s.

3. My original title for it was “Dust Rising”, but then it became “Trains, Car Wrecks, and Machine Learning’s Ontology”, which I still like, although I admit that “ontology” may not be as big a draw as I think it is.


Categories: ai, machine learning, philosophy Tagged with: ai • everydaychaos • machine learning • philosophy Date: November 15th, 2021 dw


February 28, 2021

The Uncanny Stepford Valley

You’ve probably heard about MyHeritage.com‘s DeepNostalgia service that animates photos of faces. I’ve just posted at Psychology Today about the new type of uncanniness it induces, even though the animations of the individual photos, I think, pretty well escape the uncanny valley.

Here’s a sample from the MyHeritage site:

And here’s a thread of artworks and famous photos animated using DeepNostalgia that I reference in my post:

https://t.co/MDFSu3J0H1 has created some sort of animate your old photos application and I’m of course using it to feed my history addiction.
I apologise in advance to all the ancestors I’m about to offend.

Very fake history.

I’m sorry Queenie. pic.twitter.com/2np437yXyt

— Fake History Hunter (@fakehistoryhunt) February 28, 2021

More at Psychology Today …


Categories: ai, culture, machine learning, philosophy Tagged with: ai • entertainment • machine learning • philosophish • uncanny valley Date: February 28th, 2021 dw


July 23, 2020

Getting beneath the usual machine learning metaphors: A new podcast

Google has just launched a new podcast that Yannick Assogba (twitter: @tafsiri) and I put together. Yannick is a software engineer at Google PAIR where I was a writer-in-residence for two years, until mid-June. I am notably not a software engineer. Throughout the course of the nine episodes, Yannick helps me train machine learning models to play Tic Tac Toe and then a much more complex version of it. Then our models fight! (Guess who wins? Never mind.)

This is definitely not a tutorial. We’re focused on getting beneath the metaphors we usually use when talking about machine learning. In so doing, we keep coming back to the many human decisions that have to be made along the way.

So the podcast is for anyone who wants to get a more vivid sense of how ML works and the ways in which human intentions and assumptions shape each and every ML application. The podcast doesn’t require any math or programming skills.

It’s chatty and fun, and full of me getting simple things wrong. And Yannick is a fantastic teacher. I miss seeing him every day :(

All nine episodes are up now. They’re about 25 minutes each. You can find them wherever you get your podcasts, so long as your podcast service carries ours.

Podcast: https://pair.withgoogle.com/thehardway/

Two-minute teaser:  https://share.transistor.fm/s/6768a641


Categories: misc Tagged with: ai • everydaychaos • machine learning • podcast Date: July 23rd, 2020 dw


January 28, 2020

Games without strategies

“Digital Extremes wants to break the trend of live-service games meticulously planning years of content ahead of time using road maps… ‘What happens then is you don’t have a surprise and you don’t have a world that feels alive,’ [community director Rebecca] Ford says. ‘You have a product that feels like a result of an investor’s meeting 12 months ago.'”

— Steven Messner, “This Means War,” PC Gamer, Feb. 2020, p. 34

Video games have been leading indicators for almost forty years. It was back in the early 1980s that games started welcoming modders who altered the visuals, turning Castle Wolfenstein into Castle Smurfenstein, adding maps, levels, cars, weapons, and rules to game after game. Thus the games became more replayable. Thus the games became whatever users wanted to make them. Thus games — the most rule-bound of activities outside of a law court or a tea ceremony — became purposefully unpredictable.

Rebecca Ford is talking about Warframe, but what she says about planning and road maps points the way for what’s happening with business strategies overall. The Internet has not only gotten us used to an environment that is overwhelming and unpredictable; it has also led us to develop approaches that let us leverage that unpredictability, from open platforms to minimum viable products to agile development.

The advantage of strategy is that it enables an organization to focus its attention and resources on a single goal. The disadvantages are that strategic planning assumes that the playing field is relatively stable, and that change generally happens according to rules that we can know and apply. But that stability is a dream. Now that we have tech that lets us leverage unpredictability, we are coming to once again recognize that strategies work almost literally by squinting our eyes so tight that they’re almost closed.

Maybe games will help us open our eyes so that we do less strategizing and more playing.


Categories: business, everyday chaos, games Tagged with: everydaychaos • future • games • internet • machine learning • strategy Date: January 28th, 2020 dw



