There are some scientific and medical scenarios (e.g. particle detection) that require instruments made from steel with as little radioactive contamination as possible. This is challenging because atomic-bomb testing during the 20th century left persistent nuclear fallout in the atmosphere, and so there’s apparently some industry in salvaging steel from ships that sank prior to the Trinity test. This is known as “low-background steel.”
“AI Slop” is an increasingly popular term to describe the growing amount of AI-generated content on the web, much of it intended to draw page views, spread falsehoods, and the like.
Last week, I learned of the “Low-background Steel” blog which is dedicated to sources of data from before the end of 2022 (because ChatGPT premiered in November of that year). At the moment, it only has a half-dozen entries.
I recently encountered a list of “Falsehoods programmers believe about time.” With the possible exception of the one about black holes (third from the end), they’re all false.
“There are always 24 hours in a day”? Not when we “spring forward” to Daylight Saving Time; that day only has 23 hours. Likewise, when we “fall back” to Standard Time, that day has 25 hours. Of course, even those rules aren’t true if you live in a region that doesn’t observe Daylight Saving Time.
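You can check that arithmetic with GNU date by counting the hours between consecutive local midnights. This is a quick sketch; the dates are the 2025 US transition dates, and the zone name assumes tzdata is installed:

```shell
# Hours between local midnights for a given date in a given time zone (GNU date).
day_hours() {
  local tz=$1 day=$2
  local start end
  start=$(TZ=$tz date -d "$day 00:00" +%s)
  end=$(TZ=$tz date -d "$day + 1 day" +%s)
  echo $(( (end - start) / 3600 ))
}

day_hours America/New_York 2025-03-09   # spring forward: prints 23
day_hours America/New_York 2025-11-02   # fall back: prints 25
day_hours America/New_York 2025-06-15   # an ordinary day: prints 24
```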
Even Google has trouble with this. Monday morning, at 9:07 AM Eastern Daylight Time, I asked Google what time zone Colorado is in. The response was Mountain Time, which is what I was expecting.
Google then offered a little extra information. In Maryland, it was 9:07 AM Monday morning. But in Colorado, according to Google, it was 8:46 AM on Sunday.
That time zone offset is a bit more complicated than I expected!
As part of the book tour for his new book, Class Clown: The Memoirs of a Professional Wiseass, Dave Barry sat for an interview at a Microsoft in-house event. The interviewer was Scott Hanselman, who is also Microsoft’s Vice President of Developer Community (so, a “tech-influencer”).
It’s one of the funniest interviews I’ve heard recently. The spleen story alone is enough to make it worth your time. And now you’ll have to play it too, if for no other reason than to find out about this “spleen story.”
Steve was a little confused by this mention of entering a two-digit code, but it was something I immediately recognized, as did co-host Leo Laporte.
Here’s a clip from this week’s show (episode 1026).
If you didn’t click the video, you’re depriving yourself of the chance to hear Steve Gibson reading an email I sent him.
Hi Steve,
I just listened to episode 1025 in which you read a bit of listener feedback that left you perplexed about Microsoft’s Authenticator app needing you to type in a two-digit number. I use Microsoft’s products in an enterprise environment and thought I might be able to shed some light on this.
What’s going on is that Microsoft offers the option of using a push notification instead of the TOTP (the enterprises I’m familiar with allow you to use either as a second factor).
The problem with push notifications is, of course, “notification fatigue.” People get used to seeing the notification and just click “Yes, it’s me” without thinking it through. (So if someone figures out your password and you blindly confirm it… I’m sure you see where that’s going.)
To counter this, when you log in to a Microsoft system that uses push notifications, they display a two-digit number. You then have to enter that number into the pop-up from the authenticator app. That way, it’s much more difficult for an end user to accidentally confirm a third party’s login attempt.
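The flow is simple enough to sketch in a few lines. This is a toy illustration, not Microsoft’s actual implementation:

```shell
# The login page displays a random two-digit code; the authenticator prompt only
# approves the sign-in when the user types back that same code.
displayed=$(( RANDOM % 90 + 10 ))   # code shown on the login screen (10-99)
typed=$displayed                    # simulate the user copying it correctly
if [ "$typed" -eq "$displayed" ]; then
  echo "login approved"
else
  echo "login denied"
fi
```

A mismatched code, or one the user can’t supply because they never saw the login page, means the approval simply never completes. That’s the whole point.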
I tend to take a lot of somewhat random photos of things that interest me. If it’s a good photo and the mood strikes, I’ll share it on various social media channels.
One image I recently shared featured a stack of “Wooden Bitcoin” with a joke about their value (there are certain scenarios where the wooden ones are quite valuable).
When I shared the image on social media (and when I embedded it above), I included an image description for accessibility purposes.
A small pile of wooden disks on a silver background. An outer circle on each disk reads “Wooden Bitcoin. Just as Good, Really.” Text on an inner circle reads “To Redeem: Insert into any Internet Coin Slot.” The Bitcoin logo appears at the center of each disk.
That description wasn’t very hard to come up with, but sometimes, it’s a bit more difficult to convey all the visual details. For example, consider this image of winterberries.
In the embedded image’s alt text I’ve described it as “Bright red winterberries on bare branches” which is fairly accurate, though it doesn’t offer a lot of detail.
Much like everyone else, I’ve been getting my feet wet with AI. Lately, I’ve been using Ollama to run models locally for various toy projects, and more recently I noticed that Meta’s llama3.2-vision model had become available, which makes it possible to work with image files.
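For the curious, here’s roughly what that looks like at the API level. This is a sketch of the request body Ollama’s /api/chat endpoint takes for vision models, with the image base64-encoded into the message. The filename is a stand-in, and a placeholder file is created so the snippet is self-contained:

```shell
# Create a stand-in file; in real use this would be the actual photo.
printf 'placeholder' > winterberries.jpg

# Ollama's chat API takes images as base64 strings inside a message.
b64=$(base64 -w0 winterberries.jpg)
cat > payload.json <<EOF
{"model": "llama3.2-vision",
 "messages": [{"role": "user",
               "content": "Please describe this image.",
               "images": ["$b64"]}],
 "stream": false}
EOF

# With Ollama running locally, this would send it:
# curl http://localhost:11434/api/chat -d @payload.json
```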
Uploading the winterberry image, I gave llama the prompt:
Please describe this image.
What I got back was quite a bit more detailed than what I’d come up with on my own.
This image presents a close-up view of a vibrant red berry bush, showcasing its intricate details.
The bush’s branches are bare and grayish-brown in color, with numerous small twigs extending from the main stems. The berries themselves are bright red, spherical in shape, and densely packed along the branches. They appear to be ripe, as they have lost their green hue and turned a deep crimson color.
In the background, other bushes or plants can be seen, although they are not clearly visible due to the dense foliage of the berry bush. The overall atmosphere suggests that this image was captured during the winter season, likely in December or January, given the bare branches and ripe berries.
That description pretty well knocks it out of the park! And although the image contains no Exif data, the AI correctly determined that the photo was taken “…in December or January” (December 31 in fact, right on the cusp of the change).
For comparison, I uploaded the same image to ChatGPT, using the current “GPT-4o mini” model. Entering the same prompt, I got a similarly detailed, albeit shorter, description.
This image shows branches covered with clusters of bright red berries. The berries are small, smooth, and shiny, growing densely along the thin, leafless twigs. These are likely the fruits of a holly plant, possibly winterberry (Ilex verticillata), which is known for its vibrant red berries that remain on the branches during winter, adding color to the landscape. The background is blurred, emphasizing the striking red berries in the foreground.
Again, that’s a great description. No guesses on when the photo was taken, but this time, ChatGPT correctly recognized that these were winterberries.
In the case of llama 3.2, I’d likely remove the guess about when the photo was taken since I know that information. For ChatGPT, I’d take out the guess work about the type of berries since I already know it. But to be honest, either of these outputs would be good starting points for describing the photo.
So, what about that first photo with the wooden bitcoins?
Here’s the llama output. It’s wordy and repeats some of the details multiple times. And I disagree with its description of the Bitcoin logo: the curved lines form the letter “B,” not a circle.
The image shows a collection of wooden coins with the Bitcoin logo and text on them, likely used as tokens or rewards in an online game or platform.
The coins are made of wood and have a circular shape.
They feature the Bitcoin logo in the center, which is a stylized letter “B” made up of two curved lines that form a circle.
The text on the coins reads “WOODEN BITCOIN” at the top and “JUST AS GOOD, REALLY” at the bottom.
There are 15 coins visible in the image, arranged randomly on a gray surface.
Some of the coins overlap each other, creating a sense of depth and layering.
The background is a plain gray color, which helps to focus attention on the coins.
Overall, the image suggests that these wooden coins are being used as a novelty item or collectible, perhaps in conjunction with an online game or platform that rewards players with virtual currency.
ChatGPT did better on this one.
This image shows a pile of wooden tokens designed to resemble bitcoins. Each token is circular and engraved with text and imagery related to Bitcoin. The engraving includes the Bitcoin logo (a “B” with two vertical lines) and phrases such as “Wooden Bitcoin,” “Just As Good, Really,” and “To Redeem: Insert into any internet coin slot.” The tokens are humorously crafted to mimic physical representations of digital currency.
In this case, ChatGPT clearly did the better job. Both tools did a credible job of recognizing text presented in an unusual format, but I’m not sure I’d want to trust either one to generate a description without reviewing it before sending it out into the world.
This is a great step toward solving the “blank page problem,” most likely giving a solid starting point for a description of the image (it’s also a huge step up from xkcd’s “Tasks” cartoon).
At the moment, my phone’s camera roll has nearly 10,000 photos on it (I really do take a lot of somewhat random photos). Being able to run them all through AI has the potential to save a lot of tedium in creating descriptions. But, along with the need to double-check the descriptions, I’m also still on the hook for deciding whether the image is even worth sharing.
Using an AI tool definitely has potential to save me some time. But it’s not a panacea.
Initially, you could just go to https://www.linkedin.com/mypreferences/d/settings/data-for-ai-improvement to turn off the setting, but that link doesn’t seem to be reliable. (If I go directly to it, it works, but if it goes through Facebook’s link tracker, I get a page-not-found error.)
In a desktop web browser, you can click on “View Settings” in the left navigation, click on “Data Privacy” and then, under the “How LinkedIn uses your data” heading, click on “Data for Generative AI Improvement.” At the moment, that page has a single toggle for “Use my data for training content creation AI models.”
I don’t have the app installed myself, but I’m told you can similarly go to Settings -> Data Privacy -> How LinkedIn Uses Your Data -> Data for Generative AI Improvement.
I have a habit where, when I’m about to type a control-key combination (e.g. control-C), I’ll hit the control key twice. I don’t know why I do this, and I’m not sure why it’s only the control key, but on a Mac, this has the unwanted side effect of popping up a prompt to enable dictation. (If I had dictation enabled, I suspect it would start transcribing my speech, which might be even worse.)
To turn this off:
Go to System Preferences and scroll down to “Keyboard.”
At the bottom of the Keyboard pane is the “Shortcut” label, which shows the current hotkey for activating dictation. I really don’t want a hotkey at all, but there doesn’t seem to be an option to not have one, so I’ll choose the microphone and hope it doesn’t start transcribing every random conversation near the MacBook.
I’m running a WordPress instance on a Zimaboard on the home network. Normally, that’s the kind of thing I’d put on the paid hosting service so I can let someone else worry about patching the OS and such, but since family calendars and the like don’t need to be on the public internet, I decided to do this one in-house.
Once I got WordPress up and running, I checked the “Site Health” and along with some old themes that needed to be cleaned up, there was a notice of a critical issue, telling me that “One or more required modules are missing.”
Expanding the dropdown, I saw the list included curl, imagick, zip, and gd. I tried to be thorough when installing PHP, but evidently missed a few. No biggie. Here’s how to fix it.
This is all running on Ubuntu, so the first step is to update the list of available packages. Because you always do that first.
$ sudo apt update -y
Next, install the missing packages. These are PHP modules, so each package is named “php-” followed by the module name (e.g. php-curl).
My machine is running PHP 8.1, so apt determined that the correct packages to install were php8.1-gd, php8.1-imagick, php8.1-curl, and php8.1-zip.
I know the php-imagick module depends on ImageMagick, so I wanted to make sure that was installed too. After checking the apt command’s help text, I ran apt list -a ImageMagick:
$ apt list -a ImageMagick
Listing... Done
imagemagick/jammy-updates,jammy-security,now 8:6.9.11.60+dfsg-1.3ubuntu0.22.04.3 amd64 [installed]
imagemagick/jammy 8:6.9.11.60+dfsg-1.3build2 amd64
Excellent. Everything should be good now, right? That’s what I thought, but WordPress disagreed. Returning to the Site Health page, the same message appeared, telling me that “One or more required modules are missing.”
I had to scratch my head for a bit on that one. Then I remembered: you don’t just install PHP modules, you also have to tell PHP you want to use them. (Which makes sense; you don’t necessarily want every installed module active. Xdebug, for example, notoriously causes programs to run more slowly.)
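On Ubuntu, the enable step looks something like the following. This is a sketch assuming PHP 8.1 behind Apache (substitute php8.1-fpm or nginx as appropriate for your stack); the commands are echoed here as a dry run, so drop the leading echo to actually apply them:

```shell
# Dry run: remove the leading 'echo' on each line to execute for real.
echo "sudo phpenmod -v 8.1 curl gd imagick zip"   # register the modules with PHP 8.1
echo "sudo systemctl restart apache2"             # restart so PHP picks them up
```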
I keep losing track of the file where I have these steps written down. It’s past time to put them someplace where I can find them, and perhaps help a few others as well.
CREATE DATABASE SOME_DATABASE;
CREATE USER 'SOME_USER'@'localhost' IDENTIFIED BY 'A_STRONG_PASSWORD';
GRANT ALL ON SOME_DATABASE.* TO 'SOME_USER'@'localhost';
The database permissions grant can be fine-tuned a bit, e.g. after installation, remove the DROP, ALTER, and GRANT permissions. (This does, of course, depend on what your plugins are doing, and potentially the needs of a particular major version upgrade.)
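As a hypothetical example (using the same placeholder names as above), a tightening pass after the installer finishes might look like this; re-grant the privileges temporarily before a major version upgrade that needs to alter tables:

```sql
-- Drop the schema-changing rights WordPress doesn't need day to day.
REVOKE DROP, ALTER ON SOME_DATABASE.* FROM 'SOME_USER'@'localhost';
```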
Today I learned you can set default output options for the dig command by creating a .digrc file in your home directory.
Ordinarily, running the command dig www.chaosandpenguins.com produces this rather hefty block of text.
$ dig www.chaosandpenguins.com
; <<>> DiG 9.16.1-Ubuntu <<>> www.chaosandpenguins.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40732
;; flags: qr rd ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;www.chaosandpenguins.com. IN A
;; ANSWER SECTION:
www.chaosandpenguins.com. 0 IN CNAME chaosandpenguins.com.
chaosandpenguins.com. 0 IN A 216.92.152.175
;; Query time: 0 msec
;; SERVER: 172.28.224.1#53(172.28.224.1)
;; WHEN: Wed Nov 16 23:13:00 EST 2022
;; MSG SIZE rcvd: 136
That’s a whole lot of text, so let’s add a couple of options. +noall turns off everything; running dig www.chaosandpenguins.com +noall would literally return nothing at all. To bring back the answer section (which is what I’m interested in most of the time), you add the +answer option.
$ dig www.chaosandpenguins.com +noall +answer
www.chaosandpenguins.com. 0 IN CNAME chaosandpenguins.com.
chaosandpenguins.com. 0 IN A 216.92.152.175
That’s much more compact, but getting it requires some extra typing. And since I want that version of the output most of the time, wouldn’t it be nice if there were a way to make it the default?
This is where the .digrc file comes in. You create it in your home directory and just put in a single line containing the options you want. So, to make +noall +answer the defaults, I just run this command:
$ echo +noall +answer > ~/.digrc
And now when I run dig www.chaosandpenguins.com without any options, here’s the default output:
$ dig www.chaosandpenguins.com
www.chaosandpenguins.com. 0 IN CNAME chaosandpenguins.com.
chaosandpenguins.com. 0 IN A 216.92.152.175