Happy Skunk Lifts Dishwasher

/images/_79829247-038e-402a-9ed9-36ade90322a1.jpeg

This guy looks happy, and he should, because he’s the newly selected poster boy for the system that will generate new Maranginator game bags. Confused? Me too. Come on inside and let’s figure out what I’m talking about.

◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇

First off, let me lay out the problem constraints:

  1. If I’m going to propose a new Maranga puzzle every day, I need to figure out what bag we should be playing. Every day.

  2. I don’t want those bag names to feel like they’re all coming from some kind of generative template. That would be boring.

  3. I need to be certain that my method isn’t going to produce any “unfortunate” phrases.

  4. I don’t want the system to need any smarts about grammar, but I also don’t want to choose anything that sounds grammatically awkward.

  5. Each phrase should be reasonably evocative, such as our happy skunk above. (This will let me generate an accompanying image for the post.)

  6. The phrases should be short, because in some cases at least, people may have to type them in by hand.

  7. Each phrase absolutely must be unique because we don’t want to repeat puzzles.

  8. Each phrase must produce a solvable bag. Most phrases already do, but about 1% produce an unplayable opening rack, so I need to check that before I issue a daily challenge.

  9. I need to be able to easily override the generator so I can seed my own phrase when it feels appropriate. Like when a major news story happens, or maybe when I’ve got a product launch that I want to reference.

Given those guiding parameters, I don’t think I can expect to write a generating algorithm that can also police itself against all these criteria. Anything simple enough to use a limited vocabulary and grammar to avoid embarrassing phrases will quickly feel formulaic, and anything with a more open vocabulary and grammar will need to be policed for objectionable content.

So my first decision is that I’m going to have to generate these in batches and police the language part myself. (I’ve already written a tool that can play each game bag and confirm that it’s solvable.)

And since I’m going to be working with batches of phrases, I don’t need to pick just one way of creating them. I can ask AI to generate a bunch, and I can also pull in lists of book titles, and poems, and famous quotations…
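Just to make the shape of that batch workflow concrete, here’s a rough sketch of the mechanical part of the pipeline. Everything in it is hypothetical: `is_solvable` stands in for my bag-playing tool, the source lists are placeholders, and the human review for tone and content still happens after this step.

```python
def build_candidate_list(sources, already_used, is_solvable, max_len=40):
    """Merge phrase sources into a deduplicated, pre-screened candidate list.

    This only handles the mechanical checks (uniqueness, length,
    solvability); reviewing the language itself is still a manual job.
    """
    seen = set(p.strip().lower() for p in already_used)
    candidates = []
    for source in sources:          # e.g. AI batches, book titles, quotations
        for phrase in source:
            key = phrase.strip().lower()
            if not key or key in seen:
                continue            # criterion 7: never repeat a puzzle
            if len(phrase) > max_len:
                continue            # criterion 6: keep phrases typeable
            if not is_solvable(phrase):
                continue            # criterion 8: ~1% fail the opening rack
            seen.add(key)
            candidates.append(phrase)
    return candidates

# Hypothetical usage: a hand-seeded phrase (criterion 9) can simply be
# prepended to the day's queue ahead of the generated candidates.
```

The nice property of doing it this way is that the sources stay pluggable: any new list of phrases, wherever it comes from, goes through the same screen before it ever reaches my review pile.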

In fact, the more I think about it, the more this is beginning to sound like a fun exercise. I wonder how long it would take me to write a year’s worth of suitable phrases. Maybe I should figure that part out before I worry about how the list will be processed. Even if I find it too laborious to do manually, the exercise should give me some insights into other ways I might be able to generate them.

Stay tuned for the next #happyskunk update. :-)




/images/_c70b8a42-b32d-42b6-a62f-22e32b947e62.jpeg

Spinners of the Web

The internet is full of advice on how to spot AI-generated images, videos, articles, newscasts, etc. But IMO, that’s entirely the wrong conversation.

Forget “How do we spot them?” We need to be talking about how society is going to function when the answer is: You can’t.

/images/_e1b23d38-68ca-45eb-bf1b-56bd12ad0ce3.jpeg

Obsidian-fu

Refactoring the shadowmaker has become a bigger headache than I had originally anticipated, but it’s for the long-term health of the system, so I’m sticking to my guns. This weekend added further drama when I finally stopped running away from frontmatter and embraced it for all my metadata. Sure, scattering #ch-command directives throughout the body of the notes was insane, but fixing it is going to mean more than just adding a few metadata fields. I may have to completely change the way I use Obsidian.

/images/_e42c8a8a-b127-431f-b414-425c5d17a2dd.jpeg

Ontology-2.0

While trying to integrate the many episodes of CaveTV into the site, I realized that the ontology was getting cramped. It needs to be revised to better distinguish between internal projects, external brand identities, multiple deliverables within a brand, and distinct showrooms.

What follows is the scheme we devised: what the abstractions are, how they should be tagged in Obsidian, and how the files will be managed within Hugo.