Refining The Skunk

/images/_0481dc94-e6c6-4cf0-a35c-6185fbce8875.jpeg

In a previous post, I worked out the general process by which I’ll be creating the daily challenge bag names for Maranginator, but now I have to sketch out an implementation with specific tools, algorithms, and processes. Furthermore, the Maranga newsletter is just one of several content streams I’m envisioning, and I want this system to be able to handle all of them.

◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇◆◇

Let’s start by looking at the feeds I am currently contemplating:

  1. Maranga daily game bags

  2. Daily Ogg videos

  3. Norwegian cartoon of the day

For both the videos and the cartoons, there is already a large collection of available items (hundreds of them), with more scheduled for production (in occasional batches) for the foreseeable future. The creation processes are quite different for each, so there will be no attempt to unify them, but if I’m going to feed the results into daily posting streams, it would certainly simplify the design of happyskunk if those content archives were organized consistently.

Maranga Bags


There is currently no existing database, but after having created a few examples, I think it’s safe to say that each entry will have the following elements:

  • the bag name itself, in the form of a short text string

  • a fun thumbnail image, generated via the bag name

  • an opening rack image, showing what the first letters of the game will be, so that players can be confident they are playing the same game as everyone else

That opening rack could be provided as a simple text string, but there’s something more reassuring about seeing an actual screen-grab of the game in action with that particular bag name.

I may also decide to merge the thumbnail, the bag name, and the opening rack into a single image. If that included a URL, QR code, or other call to action, it might be a very powerful, highly shareable device.
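Since there’s no database yet, the per-entry data could be as simple as a small record. Here’s a minimal sketch in Python; the class and field names are my own placeholders, not an established schema:

```python
from dataclasses import dataclass

@dataclass
class MarangaBag:
    """One daily-challenge entry (hypothetical field names)."""
    bag_name: str       # the short text string naming the bag
    thumbnail: str      # path to the fun thumbnail image
    opening_rack: str   # path to the opening-rack screen-grab

# hypothetical example entry
bag = MarangaBag("SKUNK", "thumbs/skunk.png", "racks/skunk.png")
```

If the three pieces do get merged into a single shareable image, this record collapses to a single path plus the bag name.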

Ogg Videos


Data for this collection currently lives in a MementoDB database of scripts, with keywords and (in the case of already-released episodes) a link to the video on YouTube. There’s also an existing online index of them, driven by a Memento-to-CSV converter I created for the index server. I won’t be using any of the server stuff, but I can certainly reuse the CSV converter, if that ends up being useful.

As for the content, there is currently a title and a YouTube link, although I could also pull in the thumbnail images if that helps unify the feed system.

Unforgettable Cartoons


The cartoons are currently managed within an Anki flashcard system. I wrote them all as part of my daily language-learning exercises, so it made sense to capture them in the same educational tool I was using for study. To produce the books, that database gets dragged through several different representations (namely CSV and SQLite), so I could realistically tap any of those for feeding happyskunk.

Actual content here is the cartoon image, the caption, and the keyword. Again, these could also be unified into a single image.

Just Images


So is it possible that, for every one of these feeds, I could simply have a basket of self-contained images? I could easily push the associated URL into the image metadata, I think, and then extract it from there when a candidate image gets used in its feed.

Doing so would also minimize the data-management headache. No databases to build, manage, or maintain - just a folder full of candidates waiting for their moment in the sun. Happyskunk would simply pick a file at random from the pool and move it into the Hugo system’s static assets, which don’t get rebuilt every time the site is regenerated.
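The pick-and-promote step is small enough to sketch outright. This is only my guess at the shape of it, not a finished happyskunk routine - the function name and folder arguments are placeholders:

```python
import random
import shutil
from pathlib import Path

def promote_random(pool: Path, static_dir: Path) -> Path:
    """Pick one candidate image at random from the pool and move it
    into the Hugo static-assets folder (which survives site rebuilds).
    Sketch only; names are placeholders, not the real happyskunk API."""
    candidates = [p for p in pool.iterdir() if p.is_file()]
    if not candidates:
        raise RuntimeError(f"no candidates left in {pool}")
    chosen = random.choice(candidates)
    static_dir.mkdir(parents=True, exist_ok=True)
    dest = static_dir / chosen.name
    shutil.move(str(chosen), str(dest))
    return dest
```

Because the move is destructive, each candidate gets used exactly once - the pool naturally drains as the feed runs, which also makes “how much runway is left?” a simple file count.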

But rather than generate the markdown post .md file when I move the image, I think I’ll leave that to the shadowmaker process. Shadowmaker is already transforming files from other places (Obsidian) to generate the post files. I can just add the static image subfolders (static/dailyimages/maranginator, static/dailyimages/unforged, static/dailyimages/oggodex) to shadowmaker’s inputs, and it will know how to pull metadata from the new images there and generate the .md files.
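The .md generation itself could be very small. Here’s a rough sketch of what shadowmaker might emit for a promoted image - the frontmatter fields, function name, and filename scheme are all my placeholders, not shadowmaker’s actual output:

```python
from datetime import date
from pathlib import Path

def write_daily_post(image: Path, content_dir: Path, when: date) -> Path:
    """Emit a minimal Hugo post for a promoted daily image.
    Frontmatter fields are illustrative placeholders only."""
    projtag = image.parent.name  # folder name doubles as the projtag
    content_dir.mkdir(parents=True, exist_ok=True)
    post = content_dir / f"{projtag}-{when.isoformat()}.md"
    post.write_text("\n".join([
        "---",
        f"date: {when.isoformat()}",
        f"projtags: [{projtag}]",
        f"image: /dailyimages/{projtag}/{image.name}",
        "---",
        "",
    ]))
    return post
```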

ProjTags

Since I don’t want to have to change the site generation code every time I add a new daily feed, the name of the folder within dailyimages will be taken as the name of the projtag to which the created entry will be posted. Note that if no such project exists yet, the daily posts will still be generated, but they will not be accessible from the site’s navigation system, since there will be no landing page for that stream.
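The folder-name-is-the-projtag rule is a one-liner, which is exactly why it needs no per-feed code. A sketch (function name is mine):

```python
from pathlib import Path

def projtag_for(image: Path) -> str:
    """The subfolder under dailyimages names the projtag;
    no per-feed configuration required."""
    return image.parent.name
```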

Candidate Pools

Since none of these schemes will require project-specific knowledge to handle the image feeds, the candidate pools can all be managed easily by creating a dailyfeedstocks/ folder, with a subfolder for each feed: e.g. maranginator, unforged, and oggodex.
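Enumerating the pools then falls out of the directory layout for free. A sketch, assuming the dailyfeedstocks/ convention above:

```python
from pathlib import Path

def candidate_pools(root: Path) -> dict[str, Path]:
    """Map each feed name (maranginator, unforged, oggodex, ...)
    to its candidate subfolder under dailyfeedstocks/."""
    return {p.name: p for p in root.iterdir() if p.is_dir()}
```

Adding a new feed is then just `mkdir dailyfeedstocks/newfeed` and dropping images in - no code changes.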

Manual Override

As mentioned elsewhere, I want to be able to manually override the content for a specific day, and ideally, to be able to do so long in advance, so I don’t run into any game-day race conditions.

To solve that, image files placed into the static/dailyimages folder will be named by posting date. So the Ogg video that gets posted for Dec 3, 2025 will be at static/dailyimages/oggodex/2025-12-03.jpg. When happyskunk processes the feeds and promotes candidate images, it will first look to see if a candidate named with today’s date exists. If so, that image is promoted. Otherwise, one of the images that does not have a date-style filename will be promoted instead.
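That selection rule is easy to sketch: dated files win on their day and are otherwise left alone. This is my reading of the scheme, not happyskunk code; names are placeholders:

```python
import random
from datetime import date
from pathlib import Path
from typing import Optional

def _is_dated(stem: str) -> bool:
    """True if a filename stem looks like YYYY-MM-DD."""
    try:
        date.fromisoformat(stem)
        return True
    except ValueError:
        return False

def pick_candidate(pool: Path, today: date) -> Optional[Path]:
    """Prefer a candidate named for today's date; otherwise fall back
    to a random candidate without a date-style name (dated files are
    reserved for their own day)."""
    dated = sorted(pool.glob(f"{today.isoformat()}.*"))
    if dated:
        return dated[0]
    undated = [p for p in pool.iterdir()
               if p.is_file() and not _is_dated(p.stem)]
    return random.choice(undated) if undated else None
```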

With this scheme, it is not only easy to schedule posts long in advance, but it’s also easy to see which ones are pre-scheduled and which are not, just from the distinctly different file name patterns.


Read More


/images/_e1b23d38-68ca-45eb-bf1b-56bd12ad0ce3.jpeg

Obsidian-fu

Refactoring the shadowmaker has become a bigger headache than I had originally anticipated, but it’s for the long-term health of the system, so I’m sticking to my guns. This weekend added further drama when I finally stopped running away from frontmatter and embraced it for all my metadata. Sure, scattering #ch-command directives throughout the body of the notes was insane, but fixing it is going to mean more than just adding a few metadata fields. I may have to completely change the way I use Obsidian.

/images/_e42c8a8a-b127-431f-b414-425c5d17a2dd.jpeg

Ontology-2.0

While trying to integrate the many episodes of CaveTV into the site, I realized that the ontology was getting cramped. It needs to be revised to better distinguish between internal projects, external brand identities, multiple deliverables within a brand, and distinct showrooms.

What follows is the scheme we devised for what the abstractions are, how they should be tagged in Obsidian, and how the files will be managed within Hugo.

/images/_2ef531ec-bc45-46fe-841b-6864301fa06c.jpeg

Cutting The Monster Into Pieces

Now that I’ve identified a usable hosting candidate, my final test of their service will be to roll out a full implementation of the websmith deployment scheme. But in contemplating how I’m going to do that, I’ve realized that I may not have broken the project into distinct repos properly. So I’m going to figure it out by explaining it to the rubber duck. (Meaning you. :-)