Gen AI, Humans, and Opinions
Context #
These are my own (handwritten) thoughts, and are not intended to reflect those of my employer. And yes, I use em-dashes and semicolons — probably far too frequently.
For this article, an opinion is something that we feel should or should not happen. For example, "Only one primary action on a page" is an opinion; using file-based convention over configuration (e.g. Ruby on Rails) is another.
Do opinions matter anymore? #
In the age of agentic coding, where it's easy to spin up an AI agent and hack away at any given problem, what purpose do opinions serve?
- They help humans and AI go faster: it's easier to be given a set of rules and conventions and work within them than to reinvent the wheel every time you want to do something.
- They provide consistency: both humans and AI copy existing patterns; if there are 200 ways of doing the same thing, then the solution you're working on will likely combine ideas from all 200 and create a 201st way. Inconsistency also blocks teams from making simple changes that scale out across large areas of the codebase.
- They can account for out-of-the-norm situations: for example, Gen AI currently sucks at building accessible frontend software. To counter that, we can bake certain opinions into our components so that generated code is more accessible than it otherwise would be. Opinions also let us introduce novel patterns or tricks that give us a competitive advantage (e.g. faster software, fewer steps, neat designs, etc.)
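As a concrete illustration of baking an accessibility opinion into a component, here's a minimal sketch (the `IconButton` name and API are hypothetical, not from any real design system): making the accessible label a *required* prop means generated code can't silently omit it.

```typescript
// Hypothetical design-system component. The opinion lives in the type:
// `label` is required, so any code (human- or AI-written) that forgets
// an accessible name fails to compile.
type IconButtonProps = {
  icon: string;
  label: string; // required: rendered as aria-label on the button
  onClick?: () => void;
};

// Render to a plain HTML string to keep the sketch framework-free.
function renderIconButton({ icon, label }: IconButtonProps): string {
  return `<button aria-label="${label}">${icon}</button>`;
}
```

Calling `renderIconButton({ icon: "×" })` is a type error; the component simply refuses the inaccessible usage.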
At what scale do opinions operate? #
Perhaps like the Testing Pyramid of old, there are only a few opinions that live at the global level, but those few matter more than localized opinions. For example:
- Global opinions like "use Ruby on Rails conventions", "React on the frontend", etc. help steer the whole company to be consistent and fast
- Organization opinions like "we only use Signals, not React State" help orgs be consistent and fast, but may not necessarily apply to the whole company
- Library opinions like "our design system's Button does not provide a size prop" help a design system fulfill its mission of scalable and consistent design
- Team opinions like "this flow works best based on our user research" enable teams to make decisions in the areas where they have the most context and research
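The library-level example can be sketched in code. This is hypothetical (no real design system's API is implied): the Button's props type deliberately omits `size`, encoding the library's opinion that all buttons render at one consistent scale.

```typescript
// Hypothetical design-system Button: no `size` prop, by design.
// Consumers can pick a variant, but not make the button bigger or
// smaller — that opinion is reserved for the design system itself.
type ButtonProps = {
  variant: "primary" | "secondary";
  label: string;
};

function renderButton({ variant, label }: ButtonProps): string {
  return `<button class="btn btn--${variant}">${label}</button>`;
}
```

A consumer writing `renderButton({ variant: "primary", label: "Save", size: "lg" })` gets a compile error, which is exactly the point: the opinion is enforced, not merely documented.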
Humans are the opinion-makers #
AI can share context on why other people hold certain opinions, or what the common opinion is, but ultimately it's up to humans to decide which opinions matter.
Given the current capabilities of AI, it is also largely up to the humans to ensure those opinions are enforced; for example, it's still a frequent occurrence to have AI disregard previous commands or instructions — though time will tell if this remains true.
Humans are (usually) the users #
It's important to remember that, for the majority of the software we make, humans are the ones who interact with it; they're the ones that ultimately pay us. It's up to us to ensure that, where humans are involved, our opinions about the human experience are maintained and not slopified.
What's the alternative? #
AI has changed the game on how much code can be generated by anyone. In this world, does it make sense to have opinions and guardrails? Why not let AI implement every feature however it sees fit; couldn't we just have AI update or fix things later?
Leaving AI to its own devices generally leads to:
- Heavier human cognitive load: it's harder to write or review code when every PR is a novel solution
- Larger AI token usage, and a higher potential for context overload: the AI has to explore existing solutions in the codebase or invent its own, driving up costs and slowing down implementation