TIL about the origins of the Moylan Arrow

One rainy day 40 years ago, Moylan was headed to a meeting across Ford’s campus and hopped in a company car. When he saw the fuel tank was nearly empty, he stopped at a gas pump. What happened next is something that’s happened to all of us: He realized that he’d parked on the wrong side.

Unlike the rest of us, he wasn’t infuriated. He was inspired. By the time he pulled his car around, he was already thinking about how to solve this everyday inconvenience that drives people absolutely crazy. And because the gas pump wasn’t covered by an overhead awning, he was also soaking wet. But when he got back to the office, Moylan didn’t even bother taking off his drenched coat when he started typing the first draft of a memo.

“I would like to propose a small addition,” he wrote, “in all passenger car and truck lines.” The proposal he had in mind was a symbol on the dashboard that would tell drivers which side of the car the gas tank was on. […]

As soon as they read his memo, they began prototyping his little indicator that would be known as the Moylan Arrow. Within months, it was on the dashboard of Ford’s upcoming models. Within years, it was ripped off by the competition. Before long, it was a fixture of just about every car in the world.

“Society loves the founder who builds new companies, like Henry Ford,” Ford CEO Jim Farley told me. “I would argue that Jim Moylan is an equally compelling kind of disrupter: an engineer in a large company who insisted on making our daily lives better.” These days, there are two types of drivers: the ones aware of the Moylan Arrow and the ones who get to find out.

The Genius Whose Simple Invention Saved Us From Shame at the Gas Station

Pi - multi LLM CLI coding agent

Mario Zechner has launched pi, a multi-LLM-provider CLI coding agent.

Here’s a post about how they built it and some of the design decisions behind it.

The most interesting part is how the tool deliberately leaves out a lot of the complex features that are now common in these tools:

pi is opinionated about what it won’t do. These are intentional design decisions to minimize context bloat and avoid anti-patterns.

  • No MCP. Build CLI tools with READMEs (see Skills). The agent reads them on demand. Would you like to know more? If you must, use mcporter.
  • No sub-agents. Spawn pi instances via tmux, or build your own sub-agent extension. Observability and steerability for you and the agent.
  • No permission popups. Security theater. Run in a container or build your own with Extensions.
  • No plan mode. Gather context in one session, write plans to file, start fresh for implementation.
  • No built-in to-dos. They confuse models. Use a TODO.md file, or build your own with extensions.
  • No background bash. Use tmux. Full observability, direct interaction.

Perfect retrospective of LLMs in 2025

Over at Simon Willison’s blog, he has posted an amazing retrospective on the evolution of LLMs in 2025.

It’s crazy to think that just last year we were talking about how powerful “prompt engineering” could be, yet no one could have imagined that “agentic” mode would take things to the next level.

styled-components-last-resort

The popular styled-components React library has been in maintenance mode for a while and it was officially sunset earlier this year.

The great folks at Sanity decided to fix years-long performance issues, buying everyone time to plan a migration strategy.

Our forks accept this reality. Install, get the performance boost, keep shipping while you plan your real migration. … Stage 3: Actual recovery (This quarter)

  • Pick your replacement: vanilla-extract, Tailwind, Panda CSS
  • Migrate systematically, component by component
  • Delete styled-components forever

https://www.sanity.io/blog/cut-styled-components-into-pieces-this-is-our-last-resort

Misconceptions about IPO pop

Matt Levine, of the Money Stuff newsletter, covers a major misconception about how companies that go public via Initial Public Offerings leave money on the table when their stock goes up on the first day of trading.

He quotes the paper “How much money is really left on the table? Reassessing the measurement of IPO underpricing”:

IPO issuers are thought to collectively leave billions of dollars “on the table” by underpricing shares relative to the initial trading price. However, this trading price corresponds to relatively small share volume. Because some investors are more optimistic about the shares’ value than others, the trading price exaggerates the maximum feasible IPO price for the larger IPO quantity. We assess the extent of the mismeasurement by introducing a new measure of underpricing that incorporates both share prices and their associated quantities. Using data from 2,937 IPOs from 1993-2023, our evidence suggests that IPOs are underpriced by substantially less than is commonly believed, perhaps up to 40% less.
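The conventional measure the paper pushes back on is simple arithmetic: the first-day price jump times the number of shares sold. A minimal sketch (Python, with made-up numbers that are not from the paper):

```python
# Conventional "money left on the table": the first-day price jump times the
# shares sold in the offering. The paper argues this overstates true
# underpricing, because the first-day trading price reflects only a small
# share volume relative to the full IPO quantity.

def money_left_on_table(offer_price: float, first_day_close: float,
                        shares_offered: int) -> float:
    """Naive underpricing measure, in dollars."""
    return (first_day_close - offer_price) * shares_offered

# Hypothetical IPO: priced at $20, closes its first day at $30, 10M shares sold.
print(money_left_on_table(20.0, 30.0, 10_000_000))  # 100000000.0 -> $100M
```

By the paper’s argument, that $100M headline number could overstate the real amount by a large margin, since the issuer could not have sold all 10M shares at $30.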

Quoting Armin on LLMs and understanding type languages

It turns out that LLMs are about as adept at writing TypeScript as I am. Which is to say, not very good at all.

From Armin’s In Support of Shitty Types, he says:

This gets really bad when the types themselves are incredibly complicated and non-obvious. TypeScript has arcane expression functionality, and some libraries go overboard with complex constructs (e.g., conditional types). LLMs have little clue how to read any of this. For instance, if you give it access to the .d.ts files from TanStack Router and the forward declaration stuff it uses for the router system to work properly, it doesn’t understand any of it. It guesses, and sometimes guesses badly. It’s utterly confused. When it runs into type errors, it performs all kinds of manipulations, none of which are helpful.

I’ve experienced this when having Claude Code write some TypeScript. It doesn’t understand the types, and I end up fixing the most obscure errors myself.

It seems that compiled languages might fare better (it’s been a while since I’ve worked with them):

the types that Go has are rather strictly enforced. If they are wrong, it won’t compile. Because Go has a much simpler type system that doesn’t support complicated constructs, it works much better—both for LLMs to understand the code they produce and for the LLM to understand real-world libraries you might give to an LLM.

Quoting React is Insane

Mario has a great blog post about the history of web libraries and frameworks and how complex things are right now. While it ends up focusing on how “not simple” React turned out to be, it’s more a commentary on how challenging building web apps is.

My answer to that question, surprisingly, stops roasting React and goes the opposite way, defending not only React, but also Angular and jQuery and everything that came before them. I think this code is bad because making an interactive UI where any component can update any other component is simply one of the most complicated things you could do in software. … So, this entire rant about React… it’s not even React’s fault. Neither is Angular’s, or jQuery’s. Simply, whichever tech you choose will inevitably crumble down under the impossible complexity of building a reactive UI.

Great long read that brought back so many fond memories about jQuery and Angular 1. But not Angular 2, I will never forgive what they did to it.

Quoting Learnings from two years of using AI tools for software engineering

The Pragmatic Engineer has a guest post from Birgitta Böckeler, Distinguished Engineer at Thoughtworks, where she talks about the evolution of the AI ecosystem for developing software.

It’s a really good overview of how we got from “autocomplete on steroids” in 2021, to “autonomous background agents” in only 4 years.

This really clicked for me: generative AI tooling is not like any other software. To use it effectively, it’s necessary to adapt to its nature and embrace it. This is a shift that’s especially hard for software engineers who are attached to building deterministic automation. It feels uncomfortable and hacky that these tools sometimes work and other times don’t. Therefore, the first thing to navigate is the mindset change of becoming an effective human in the loop.

This is spot on. It’s hard to trust the tool when it doesn’t always work, but Birgitta provides some ways to improve this:

  • Custom instructions about coding style and conventions, tech stack, domain, or just mitigations for common pitfalls the AI falls into.
  • Break down the work first into smaller tasks so that it’s easier to execute the right changes in small steps, and you have a chance to review the direction the AI is going in and to correct it early, if needed.
  • It’s much more effective to start new conversations frequently, and not let the context grow too large because the performance usually degrades.
  • A concrete description which will lead to more success is something like, “add a new boolean field ‘editable’ to the DB, expose it through /api/xxx and toggle visibility based on that”.
  • Use a form of memory: A common solution to this is to have the AI create and maintain a set of files in the workspace that represent the current task and its context, and then point at them whenever a new session starts. The trick then becomes to have a good idea of how to best structure those files, and what information to include. Cline’s memory bank is one example of a definition of such a memory structure.

I’ve read this online in multiple places, but it’s great to have it all laid out in one spot.

Will Larson’s idea about the advantage of authors in the age of LLMs

Will has an interesting idea about how authors can still thrive in the age of LLMs.

Instead, I’ve been thinking about how this transition might go right for authors. My favorite idea that I’ve come up with is the idea of written content as “datapacks” for thinking. Buy someone’s book / “datapack”, then upload it into your LLM, and you can immediately operate almost as if you knew the book’s content.

This is not necessarily a new idea. Kevin Rose has an article where he creates custom ChatGPTs based on specific books.

What’s interesting about Will’s idea is the role of the author. Instead of having people upload your book to a chatbot (which has most likely already been trained on it illegally), you provide a legal way of doing that.

What is our competitive advantage as authors in a future where people are not reading our work? Well, maybe they’re still buying our work in the form of datapacks and such, but it certainly seems likely that book sales, like blog traffic, will be impacted negatively.

The Visual World of Samurai Jack

Samurai Jack was one of my favorite shows, and this article from Animation Obsessive goes in depth on what made it so special (and so avant-garde).

That was a core inspiration for his original Samurai Jack (2001–2004). “There are so many sitcoms, especially in animation, that we’ve almost forgotten what animation was about — movement and visuals,” he told the press after the show debuted. The Samurai Jack crew aimed to “tell the stories visually… tell a very simple story visually.” Talking was kept to a minimum. Instead, Samurai Jack would need enough richness and variety in its look and movement (and its filmmaking) to keep people gripped without words.

The very limited talking made it so that you had to pay attention during every single second. I don’t think I’ve ever watched a show, before or after, where that has been the case.

Samurai Jack never reached that level. It was popular, though. And that, according to the cynical view, shouldn’t have been possible. Even with its robots and samurai and demons and zombies, Samurai Jack didn’t really fit with the mass culture of its time.

Time for a rewatch.

Quoting Armin on AI Changes Everything

Do I program any faster? Not really. But it feels like I’ve gained 30% more time in my day because the machine is doing the work. I alternate between giving it instructions, reading a book, and reviewing the changes. If you would have told me even just six months ago that I’d prefer being an engineering lead to a virtual programmer intern over hitting the keys myself, I would not have believed it. I can go make a coffee, and progress still happens. I can be at the playground with my youngest while work continues in the background. Even as I’m writing this blog post, Claude is doing some refactorings.

Armin Ronacher

Andor Season 2 Behind the Scenes

I always love getting to see how movies and TV shows are made. There’s something about how the work of hundreds or thousands of people gets orchestrated into what we see. I find it fascinating. Anyways, with Andor season 2 wrapped up (I love this show), here are a few links about how it got made:

  • http://www.youtube.com/watch?v=4EWCzic9z_M
  • http://www.youtube.com/watch?v=q2WW2emgxRI
  • http://www.youtube.com/watch?v=mO1FZB-Rnxk
  • http://www.youtube.com/watch?v=PrAWHkGLn7g
  • https://www.youtube-nocookie.com/channel/UCBtHgTjADDZ6D2sFFUyH41A

Five nines availability

Quoting Justin Garrison

It takes you about the same amount of time to read this post as the amount of downtime you’d be allowed per week with 99.999% availability
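The arithmetic behind that claim is easy to check. A quick illustrative sketch of the downtime budget implied by various availability targets:

```python
# Downtime budget implied by an availability target, per time window.
# At 99.999% ("five nines"), only about 6 seconds of downtime are
# allowed per week -- roughly the time it takes to read a short post.

SECONDS_PER_WEEK = 7 * 24 * 60 * 60  # 604,800

def downtime_budget_seconds(availability: float,
                            window_seconds: int = SECONDS_PER_WEEK) -> float:
    """Seconds of allowed downtime per window at a given availability level."""
    return (1 - availability) * window_seconds

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    print(f"{label}: {downtime_budget_seconds(availability):.1f} s/week")
# five nines -> roughly 6.0 seconds of downtime allowed per week
```

Three nines, by contrast, buys you about ten minutes per week, which is why each extra nine gets so much more expensive to engineer for.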

Quoting Tim Kellogg

I predicted that software engineering as a profession is bound to decline and be replaced by less technical people with AI that are closer to the business problems. I no longer think that will happen, but not for technical reasons, but for social reasons.

What Changed

I saw people code.

I wrote the initial piece after using Cursor’s agent a few times. Since then the tools have gotten even more powerful and I can reliably one-shot entire non-trivial apps. I told a PM buddy about how I was doing it and he wanted to try and… it didn’t work. Not at all.

What I learned:

  1. I’m using a lot of hidden technical skills
  2. Yes, anyone can do it, but few will

- Tim Kellogg

Apple AppStore commission

Today has been an eventful day for the Apple AppStore. John Gruber has an excellent summary of it, but the TL;DR: Apple cannot charge developers a commission for telling users to purchase an in-app purchase or subscription from outside the AppStore.

At Scribd, I work on the team that’s responsible for all things subscriptions. Before that, I dabbled in AppStore payments for Bitwise. So I’ve spent countless hours trying to understand whether we could use our web payments processor, which we already integrate and support, for everything on Apple or Google. And the answer was always no, until today (there was a ruling in 2021 that allowed developers to do this, but Apple’s restrictions effectively rendered it moot).

From a user perspective, I like subscribing via the AppStore. It’s easy to keep track of my subscriptions, I trust that my payment information is secure, and I like Family Sharing. But as a developer, it’s such a pain sometimes. Documentation isn’t always the best (Google’s is a bit better), customer support is harder since we don’t have a lot of control, and it’s really easy to make a mess of your subscription groups. On top of that, creating a specific integration for each AppStore is expensive to build and maintain. It’s much easier for developers to handle subscriptions via the web (there are plenty of paid or open-source platforms that help with this now, or you can roll your own if you’re adventurous) and have all platforms subscribe through it. Then you only have to devote your resources to a single platform.

Plus, the commission on every purchase is such a hard pill to swallow. I do think they should take a cut: the AppStore deserves it, since it provides a platform for subscriptions and purchases, and processing payments costs them money. But such a high fee (30% for the first year if your business makes more than $1MM) is expensive for a business, on top of the team needed to maintain the integration.

Either way, I’m excited about this ruling. We, as developers and consumers, want more options. We can continue to use AppStores (at a more reasonable fee) or use something else.

The font that rules the world

Amazing post by Marcin Wichary about a 135-year-old font “called” Gorton (he assumes that’s its name because it doesn’t really have a proper one).

Once you see it, you’ll realize it’s used everywhere.

But what’s outstanding about this post is Marcin’s dedication to documenting the font’s history. Hats off to you.

Delegating work based on task relevant maturity

I’m not an engineering manager (and I’ve come to realize that I don’t want to be), but one thing I’ve learned is that there are different ways of supporting engineers.

In a blog post, Ben mentions some learnings about being a new manager, but the part that resonated with me is:

Instead of “don’t micromanage,” the advice I wish I’d gotten is:

  1. Manage projects according to the owner’s level of task-relevant maturity.
  2. People with low task-relevant maturity appreciate some amount of micromanagement (if they’re self-aware and you’re nice about it).

One thing that really helped me calibrate on this was talking about it explicitly. When delegating a task: “Do you feel like you know how to do this?” “What kind of support would you like?” In one-on-ones: “How did the hand-off for that work go?” “Is there any extra support that would be useful here?”

Over the last 10 years, I’ve had to support many engineers and this advice is spot on. Some people require more guidance than others: they don’t know the codebase, they haven’t done that type of task before, they’re not good at it, etc.

Knowing how to tailor your mentoring and feedback based on their skillset on that particular task is crucial. It can be the difference between everyone having a good time, and chaos and disappointment.

Quoting Jacob Voytko in debugging Google Docs

Keeping with the Google Docs theme, Jacob Voytko goes into detail about how they once debugged a (seemingly) nondeterministic bug in Google Docs.

It was a fatal error. This means that it prevented the user from editing without reloading. It didn’t correspond to a Google Docs release. The stack trace added very little information. There wasn’t an associated spike in user complaints, so we weren’t even sure it was really happening — but if it was happening it would be really bad. … I do it a few more times. It’s not always the 20th iteration, but it usually happens sometime between the 10th and 40th iteration. Sometimes it never happened. Okay, the bug is nondeterministic. We’re off to a bad start.

Errors like this are always fun to debug /s

I really like the way they did it, though: using their automated tooling to repeat something tedious over and over again.

As for the resolution, you can clearly see the benefit of working at a company so big that it also built the platform where the bug was happening.

contenteditable HTML attribute

While reading the Hacker News post “Show HN: Nash, I made a standalone note with single HTML file”, one of the discussions centered around how the linked site used

   <div id="editor" contenteditable="true">

to do most of the heavy lifting.

To which Steve Newman, one of the cofounders of what became Google Docs replied with:

This one line was like 90% of the original implementation of Writely (the startup that became Google Docs; source: I was one of the founders).

The other 90% was all the backend code we had to write to properly synchronize edits across different browsers, each with their own bizarre suite of bugs in their contenteditable implementations :-)

Checking the contenteditable documentation, it’s been available since 2008. I always assumed that Google Docs was using JavaScript to dynamically render a textarea when the user focused on an element, then render the result back into HTML elements when the element lost focus, like we used to do with WYSIWYG editors around that time. But the fact that it was “simpler” than that, and that one attribute launched a multimillion-dollar product (and I’m going to assume Microsoft Office, Notion, etc. are built similarly?), makes it more beautiful.

Thoughts on Armin’s Ugly Code and Dumb Things

A few weeks ago Armin Ronacher wrote a post about Ugly Code and Dumb Things, where he argues that you can’t always build real-life products on top of clean, sparkling codebases. Sometimes you just have to do the ugly, dirty hack in order to launch.

Perfect code doesn’t guarantee success if you haven’t solved a real problem for real people. Pursuing elegance in a vacuum leads to abandoned side projects or frameworks nobody uses. By contrast, clunky but functional code often comes with just the right compromises for quick iteration. And that in turn means a lot of messy code powers products that people love — something that’s a far bigger challenge.

This is something that I’ve been discussing with my team lately. If we’re going to run a quick experiment, let’s do it fast and dirty (without compromising security or too much performance). If it succeeds, we’ll be able to rebuild it in a more scalable way. The key thing is that we can launch much faster, and find out whether it’s worth focusing on long term.

This is in contrast to when you’re building a framework or foundry that other people will depend on:

But it took me years to fully realize what was happening here: reusability is not that important when you’re building an application, but it’s crucial when you’re building a library or framework.

And the final conclusion:

At the end of the day, where you stand on “shitty code” depends on your primary goal:

  • Are you shipping a product and racing to meet user needs?
  • Or are you building a reusable library or framework meant to stand the test of time?

Both mindsets are valid, but they rarely coexist harmoniously in a single codebase. Flamework is a reminder that messy, simple solutions can be powerful if they solve real problems. Eventually, when the time is right, you can clean it up or rebuild from the ground up.