makeshift.computer

secure-er secrets in env

Between supply chain attacks, code execution CVEs, and AI shenanigans, plaintext .env files are starting to feel like a giant kick-me sign. So nowadays I keep my local dev secrets behind 1Password and direnv:

  • I’ve been a happy paying user of 1Pass for years, and they have a nifty CLI.
  • direnv works across ecosystems and generally does the right thing. Also it’s not written in javascript #sorrynotsorry

How it works

First, install 1Password CLI, and install direnv.

Let’s say you’ve got some code in ~/projects/awesome_ai_bot that needs an OPENAI_API_KEY environment variable. You save your openai key into your 1Pass local_apps vault with the name openai_api_key, which you can access with:

op item get openai_api_key --vault local_apps --fields credential --reveal

Then, in ~/projects/awesome_ai_bot/.envrc:

export OPENAI_API_KEY=`op item get openai_api_key --vault local_apps --fields credential --reveal`

When you cd ~/projects/awesome_ai_bot, direnv runs the file, 1Pass prompts once to decrypt, and Bob’s your uncle:

  • Access follows 1Password semantics. You confirm once and everything is unlocked for your session. If it’s been more than 10 minutes, or your 1Pass gets locked, etc, it will re-prompt.
  • The 1Pass prompt makes it less likely that your secrets get stolen in the background by, say, an npm post-install script.
  • Access is per shell, and direnv loads/unloads variables based on your working directory, so secrets aren’t left hanging around.
  • .envrc is safe to commit, assuming you have that privilege. If not, just put it upstream (i.e. in a parent directory outside the repo). This is super handy for client work, where I will have ~/projects/client_1/.envrc which then gets applied to ~/projects/client_1/backend, ~/projects/client_1/frontend, etc. I get the safety of this setup without necessarily imposing 1Pass + direnv on their processes.

even more secure-er secrets

If you have full control over the project, it’s even better to skip direnv and wire 1Pass into your run scripts directly. Then the secrets are only exposed to the relevant processes, vs your entire shell session.

# in `secrets.env`
OPENAI_API_KEY="op://local_apps/openai_api_key/credential"
op run --env-file="secrets.env" -- npm run dev

service accounts

1Pass also offers service accounts, which allow for tighter scoping + fully automated access with no decryption prompt. In theory they are more secure, e.g. a rogue script can’t escalate access to other secrets after you authenticate. In practice I’ve found them not worth the overhead, especially since you have the bootstrap problem — OP_SERVICE_ACCOUNT_TOKEN is itself a sensitive secret. Defense-in-depth is about layers, and IMO using 1Pass at all is already a big step up.

[ ... more ]

Dec 11, 2025, 7:07 AM

pull requests for everything

Nowadays it’s common to see developers juggling multiple claude codes (or codex, or aider, or …). AI has woven itself into coding workflows because developers dog‑food the tools that help them build software, so those tools tend to evolve the fastest.

But I assert we have another big advantage: we have pull requests (“PRs”) and change tracking.

For those unfamiliar: When you change code you don’t just hit “Save” like on a spreadsheet. You package the changes into a “diff,” open a pull request on GitHub, and let the platform stage the diff. Team members review the changes and tools run automated tests. Then you edit, reject, or merge the change. This workflow gives you:

  • parallelism: multiple branches can be open simultaneously, each representing a different line of development
  • safety: changes are isolated until approval, and can be easily (well, most of the time) reverted
  • visibility: the history of what was changed, why, and by whom is always available

Now think about an AI agent that’s refactoring code or adding new features. All it needs to do is submit a PR with its changes — the rest of the workflow stays the same. Devs can kick off 10 agents, go make some coffee, and come back to changes that can be reviewed and worked on. The system provides natural safety even when the AI goes completely off the rails.

No such conventions exist for other types of knowledge work, e.g. there are no “pull requests” for email subscriber lists or marketing funnels. AI agents in those areas typically write directly into production, or rely on ad‑hoc scripts that lack version control, review, or rollback. So you either:

  • let the AI go full yolo: give it unrestricted access, risk coming back to a hole in the ground where your ad campaign used to be, or…
  • babysit it: sandbox the AI, constantly monitor, and isolate it from the live system to prevent damage

Both scenarios kind of suck, and I think the lack of PR-style workflows for other knowledge domains will continue to hold back the usefulness of AI systems… at least until someone builds it :)

[ ... more ]

Aug 9, 2025, 10:11 AM

Automation Should Be Like Iron Man, Not Ultron

Another way I’ve heard this put is “build mech suits, not robots.”

Iron Man’s exoskeleton takes the abilities that Tony Stark has and accentuates them. Tony is a smart, strong guy. He can calculate power and trajectory on his own. However, by having his exoskeleton do this for him, he can focus on other things. Of course, if he disagrees or wants to do something the program wasn’t coded to do, he can override the trajectory.

Ultron, on the other hand, was intended to be fully autonomous. It did everything and was, basically, so complex that when it had to be debugged the only choice was (spoiler alert!) to destroy it.

“Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” […] This is called the Leftover Principle. You automate the easy parts and what is “left over” is done by humans. In the long run this creates a very serious problem. The work left over for people to do becomes, by definition, more difficult. […] Taken to its logical conclusion, this paradigm results in a need to employ impossibly smart people to do impossibly difficult work. Maybe this is why Google’s recruiters sound so painfully desperate when they call about joining their SRE team.

  • Obvious question in the “junior developers completely replaced” rhetoric: if all the junior developers get replaced, where will the senior developers come from? Likewise for other fields e.g. accountants.
  • Absent a full Matrix / Skynet situation, there will always be, by definition, a need for skilled troubleshooters.

[ ... more ]

Jun 5, 2025, 7:40 AM

hacking excerpts in astro

Update Dec 11 2025: I’ve scrapped the excerpt pre-rendering approach entirely after switching to MDX. The MDX rendering pipeline is even harder to hook into than markdown, with no (straightforward) programmatic API exposed. The blog now renders full post content on the feed with line-clamp for visual truncation. This approach ensures consistent rendering behavior at the cost of shipping more HTML per page. Pagination helps keep page sizes reasonable.


I wanted to do excerpts to give the main blog page more of a “feed” than a “post list” feeling. Surprisingly there wasn’t a straightforward / built-in way to do it in astro. The markdown => HTML conversion seems to happen as part of loading a post collection, and any downstream methods like render() just reuse that HTML.

When I looked around there were two bits of prior art:

  • The top google result renders the post body with an external library and then truncates the result. I didn’t love this because it seemed error-prone, e.g. it’s easy to truncate through the middle of a codeblock. Using markdown-it also means that none of the astro setup, e.g. syntax highlighting, is reused.
  • I also tried out a library from the astro integrations list, but its output heavily favored simplicity and dropped a lot of things I cared about, e.g. blockquotes.

Eventually I read through the astro code + the library and settled on the following approach:

import { createMarkdownProcessor } from "@astrojs/markdown-remark";
import { fromMarkdown } from "mdast-util-from-markdown";
import { toMarkdown } from "mdast-util-to-markdown";
import type { Root } from "mdast";

const renderer = await createMarkdownProcessor();

async function getExcerpt(post: CollectionEntry<"posts">, maxChars: number) {
  const rawBody = post.body || "";

  // convert to an AST to work with structured data
  const parsed = fromMarkdown(rawBody);
  // truncate the AST and convert it back to markdown
  const truncated = toMarkdown(truncateMarkdown(parsed, maxChars));

  // use astro's remark setup to render to html
  return (await renderer.render(truncated)).code;
}

function truncateMarkdown(root: Root, maxChars: number) {
  // do stuff to the tree here
}

And then in usage:

<Fragment set:html={getExcerpt(post, post.excerptLimit)} />

This isn’t perfect since astro passes in a bunch of config to both createMarkdownProcessor and the render call to make fancy things like relative URLs work. Unfortunately, I couldn’t find a straightforward way to replicate all that, so for now this setup will have to do. I also have no idea if this works with mdx or how hard that would be. Final output (as of this writing) here.

[ ... more ]

Jun 4, 2025, 11:03 AM

finally setting this thing up

Jun 3, 2025

read field reports, ignore speculation

The easiest way to dodge The Creative World’s Bullshit Industrial Complex in a hot field is to pay attention to case studies and field reports rather than breathless hype and over-generalized speculation.

Field reports are first or second hand accounts of concrete use cases and things that actually happened, e.g. “here’s what I ran into when I built X,” “our team tried Y and here’s how it went,” etc.

Speculation may include a grain of reality (or not!) but is usually written by someone on the outside and followed up with baseless conjecture. e.g. “X built Y in 3 days and this is why AI is coming for all our jobs,” “the new oculus is so good soon we will all be living in the metaverse,” etc.

Jan 27, 2025

Real Intelligence:

A case of AI usage outside of the “usual” techno-sphere.

artificial intelligence (AI) was now handling some 40% of customer service inquiries and that that number had increased from just 10% in a few short months.

Someone else, listening in on the conversation, asked if his in-house customer service team, seeing this trend, feared for their jobs? But Kevin said no, that they loved the AI because it enabled them to spend more time solving the customer service inquiries that required depth of touch

there are small problems that are best handled quickly and complicated problems that are best handled carefully. Further, while handling the small problems quickly can show up almost immediately on the income statement in the form of revenue retention, it’s handling the complicated problems carefully that can create tremendous lifetime value and turn your customers into zealots who will proselytize for your brand.

So an interesting thing to think about is which problems in your business might require real intelligence. Because in the end, that’s my hope for AI. That it can take away from us the things we spend time on that make us less real and less human, so, like Rylee + Cru’s customer service team, we can spend more time on the things that make us just that.

[ ... more ]

Jan 20, 2025

real learning is effortful

Real learning (the kind that sticks) requires active effort and engagement with the material: building a toy project, writing a critical essay, etc. Passive consumption (especially of short form media) should be treated as entertainment more than learning.

The rule of thirds applies here. The valence when e.g. reading the average Twitter post tends to be pretty flat. There are no “tough” sessions of scrolling the feed (or, conversely, really “great” sessions), so how much growth is really taking place?

[ ... more ]

Jan 7, 2025

rule of thirds

When you’re chasing a big goal, you’re supposed to feel good a third of the time, okay a third of the time, and crappy a third of the time… and if the ratio is roughly in that range, then you’re doing fine.

Source: Bravey, Alexi Pappas

This probably applies to sustainable growth / “productive discomfort” in general. Corollaries: If you’re consistently feeling good, say, >50% of the time, you’re not pushing hard enough to grow. If you’re feeling crappy >50% of the time, you’re pushing too hard to be productive.

It may also be why small + consistent sessions are recommended over long but infrequent ones, whether that’s workouts or knowledge work like writing. In addition to volume + habit benefits, you get the full “range” of sessions. In three 1-hour sessions you can have a good one, a tough one, and an okay one. In contrast, you’re unlikely to have a great + okay + tough combo in one 3-hour session.


From Dan John in Never Let Go:

In a group of five workouts, I tend to have one great workout, the kind of workout that makes me think in just a few weeks I could be an Olympic champion, or maybe Mr. Olympia. Then, I have one workout that’s so awful the mere fact I continue to exist as a somewhat higher form of life is a miracle. Finally, the other three workouts are the punch-the-clock workouts: I go in, work out, and walk out. Most people experience this.

Not quite thirds, but directionally tracks.

[ ... more ]

Jan 7, 2025

on shortification of “learning”

Learning is not supposed to be fun. It doesn’t have to be actively not fun either, but the primary feeling should be that of effort. It should look a lot less like that “10 minute full body” workout from your local digital media creator and a lot more like a serious session at the gym. You want the mental equivalent of sweating.

If you are consuming content: are you trying to be entertained or are you trying to learn? And if you are creating content: are you trying to entertain or are you trying to teach? You’ll go down a different path in each case. Attempts to seek the stuff in between actually clamp to zero.

Unless you are trying to learn something narrow and specific, close those tabs with quick blog posts. Close those tabs of “Learn XYZ in 10 minutes”. Consider the opportunity cost of snacking and seek the meal - the textbooks, docs, papers, manuals, longform. Allocate a 4 hour window. Don’t just read, take notes, re-read, re-phrase, process, manipulate, learn.

[ ... more ]

Jan 2, 2025

readwise + reader api notes

A few weeks back I switched from Raindrop to Readwise for my reading, highlights, and related notes. The main reason for switching was the Kindle integration, but one of my criteria was an API where I can automatically collect + archive the data into my personal cloud. It ended up being a bit of a learning curve, so below are notes for anyone who might be interested in doing the same:

The Readwise umbrella has two apps: Readwise and Reader. Readwise focuses on highlights + reviewing highlights, whereas Reader is a web bookmarking + highlighting tool that’s more akin to Raindrop. The APIs are separate, with their own docs + internal concepts: the Readwise API and the Reader API.

  • Consistent with the apps, the Readwise API focuses on “Highlights” + “Books” and the Reader API centers around “Documents” which can be links, highlights, and notes.
  • Readwise has many integrations, including with Reader. So a saved link + highlights in Reader will also show up in Readwise (despite the “book”-centric terminology). Saved links without highlights will not show up in Readwise.
  • Somewhat annoyingly, the two use completely different id spaces. e.g. if I have “17 ChatGPT Use Cases”, it might have user_book_id 123 in Readwise but document_id asdf in Reader. There is an indirect one-way lookup between the two: the Readwise record contains a unique_url field that has the format https://read.readwise.io/read/asdf. Other than that, the only way to tell if a Reader item is “the same” as a Readwise item is via other attributes like article title or url.
  • Another idiosyncrasy: highlight entries from the Readwise API contain location information, i.e. “where in an article” or “what page in the book” the highlight came from. The Reader API does not have this information – even for items generated by the Reader app.
  • Finally: the Reader API recently added a withHtmlContent parameter, which returns a document’s scraped HTML from Reader. Readwise has no equivalent of this field.

So in my case, where I want the maximum information around my reading + highlights + notes, I have to sort of stitch together the two APIs. Roughly:

  • Pull non-highlight items that get saved in Reader so I have the most complete list of links / articles / videos / etc.
  • Readwise is the source of truth for highlights + metadata. It contains the most info including location information.
  • When an item gets saved in Reader and then highlighted later (i.e. in between archival script runs), use the unique_url above to link the original Reader item to the Readwise item for highlight information.
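
A minimal Python sketch of that unique_url linking step (the record shapes here are hypothetical and trimmed down; real API payloads carry many more fields):

```python
def reader_id_from_unique_url(unique_url):
    """Extract the Reader document_id from a Readwise unique_url,
    e.g. "https://read.readwise.io/read/asdf" -> "asdf"."""
    prefix = "https://read.readwise.io/read/"
    if unique_url and unique_url.startswith(prefix):
        return unique_url[len(prefix):]
    return None

def link_items(readwise_books, reader_docs):
    """Map Reader document_id -> Readwise record for items present in both."""
    reader_ids = {doc["id"] for doc in reader_docs}
    linked = {}
    for book in readwise_books:
        doc_id = reader_id_from_unique_url(book.get("unique_url"))
        if doc_id in reader_ids:
            linked[doc_id] = book
    return linked
```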

Despite some workarounds, this API integration has been working alright so far. The products themselves are also pretty top notch. Now I just have to go and backfill all the Reader html_content for my items from before they introduced it…

[ ... more ]

Dec 10, 2024

local secrets automation with 1pass

One of the annoying things about running the personal cloud is managing secrets (API keys, google creds, etc) for the bits that run locally on my laptop.

So far I’ve just been shuffling these creds around local-only env files tied to a couple of start scripts. It’s kind of a pain cause the files can’t be shared or committed to GitHub for obvious reasons, so I have to walk on glass every time I touch them. And lord help me if I accidentally delete one of the files or get them mixed up somehow.

Last week I finally got tired of the nonsense and automated it via 1Pass. Apparently they have a nifty command line package that makes this pretty easy. (And if you’re not running 1Pass or some password manager by now… What are you doing??)

First, set up a new env file with 1pass urls. local_apps is a separate vault I keep local dev secrets in, just for organization’s sake.

DATABASE_URL="op://local_apps/server_phoenix_settings_prod/database_url"
SECRET_KEY="op://local_apps/server_phoenix_settings_prod/secret_key"
AWS_ACCESS_KEY_ID="op://local_apps/server_aws_prod/username"
AWS_SECRET_ACCESS_KEY="op://local_apps/server_aws_prod/credential"
GOOGLE_CLIENT_ID="op://local_apps/server_google_prod/username"
GOOGLE_CLIENT_SECRET="op://local_apps/server_google_prod/credential"

Then op run injects them into the environment:

op run --env-file="./.deploy.env" -- ./bin/server start

Now the local app secrets have all the usual 1Pass convenience: history tracking, sharing between machines, encryption at rest, etc. Plus I can (finally!) commit the 1pass env files to GitHub as part of the build.

[ ... more ]

Dec 3, 2024

extracting brave browsing history

As part of building my personal cloud, I wanted to back up my browser history. This is useful for info mining and search purposes, but also because Brave (or Chromium?) starts dropping data after 90 days. Who knew. The 90 day limit is hard-coded, so data loss is unavoidable without an export.

So first, the location. On Macs this is ~/Library/Application Support/BraveSoftware/Brave-Browser/[Default || Profile N]/History/brave_history.sqlite.

For Chrome, the location is similar: ~/Library/Application Support/Google/Chrome/[Default || Profile N]/History.

The history is kept in a sqlite database, either brave_history.sqlite or History. It’s usually locked while the browser process is running, but a quick hack is to copy the file to a temp location and work off the copy.

From there, the usual sqlite exploration tools work. I recommend litecli, by the same folks behind the excellent pgcli. An example query to extract browsing history into denormalized rows to throw into a csv or what have you:

select
  v.id,
  v.visit_time,
  u.url as url,
  u.title as url_title
from
  visits v
inner join urls u
  on v.url = u.id;
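
If you’d rather script the whole export, here’s a rough Python version of the same join (a sketch only: it copies the db aside first, per the lock workaround above, and assumes the visits/urls schema from the query):

```python
import shutil
import sqlite3
import tempfile

def export_history(history_path):
    """Copy the (possibly locked) history db aside and dump denormalized rows."""
    with tempfile.NamedTemporaryFile(suffix=".sqlite") as tmp:
        shutil.copy(history_path, tmp.name)  # work off the copy, not the live file
        conn = sqlite3.connect(tmp.name)
        try:
            return conn.execute(
                """
                select v.id, v.visit_time, u.url, u.title
                from visits v
                inner join urls u on v.url = u.id
                """
            ).fetchall()
        finally:
            conn.close()
```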

A quick note on visit_time: it is not milliseconds from unix epoch, as I had initially expected. It’s actually microseconds from the win32 epoch, which is Jan 1 1601 00:00:00 UTC. To get to unix epoch millis, subtract 11644473600000000 from visit_time and divide by 1000.
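
That arithmetic as a tiny Python helper:

```python
# microseconds between the win32 epoch (1601-01-01) and the unix epoch (1970-01-01)
WIN32_EPOCH_OFFSET_US = 11_644_473_600_000_000

def visit_time_to_unix_millis(visit_time):
    """Convert Chromium's visit_time (win32-epoch microseconds) to unix-epoch millis."""
    return (visit_time - WIN32_EPOCH_OFFSET_US) // 1000
```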

[ ... more ]

Nov 19, 2024

aspiration-first vs reality-first

Over the years I’ve noticed people seem to have two fundamental orientations towards goals: aspiration-first and reality-first.

Aspiration-first thinking starts with an external or motivational stance. For example: “I need to lose 20 lb before summer to fit into my beach clothes / reach a certain BMI / etc” Or in business: “We need $1MM ARR this year because that’s the benchmark for companies of our stage.” From there, achievement is throwing stuff against the wall till something hits the right trajectory.

Reality-first thinking starts with understanding the reality of a situation or system and works from there. For example: “Studies suggest sustainable weight loss is 300-500kcal deficit per day, so I should lose Xlb by summer. But also I like eating, so maybe (X - 5)lb is a better goal.” Or: “Historically our revenue growth has been X per month, but our sales process isn’t as efficient as it could be, so with some tweaks we should be able to get it to (120% of X) per month.”
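
To make the weight-loss version concrete, here’s the back-of-envelope math in Python (the ~3500 kcal-per-pound figure is the usual rule of thumb, and itself only an approximation):

```python
def projected_loss_lb(deficit_kcal_per_day, days, kcal_per_lb=3500):
    """Estimate pounds lost from a sustained daily calorie deficit."""
    return deficit_kcal_per_day * days / kcal_per_lb
```

So a 500 kcal/day deficit held for 90 days pencils out to roughly 13 lb, which gives you a reality-derived ceiling for the goal.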

This rhymes a bit with “top-down” vs “bottom-up,” but I don’t think they are the same. If an experienced CEO says “we should get our revenue up 15% within the next 6 months,” that’s top-down but could be reality-first because that goal is informed by a long career in similar companies where they have increased revenue by 15% in 6 months. Likewise, a bottom-up goal might be “reduce server costs 20%” in service of reducing burn rate… But that 20% number could be completely aspirational with no basis in reality.

Both approaches have their pros and cons. Reality-first has the obvious advantage of being easier to reason about: either execution is off, or an assumption about reality was wrong. In that way, the lines of accountability are clearer. There’s no “shucks we tried, but…” However, sometimes a situation is so uncertain that you have no choice but to be aspirational. (Though I’d argue that “setting goals” is less useful then, vs taking a stance of “learning about reality.”) Lastly: the “big hairy audacious” goals that are popular in management literature are aspirational by definition, so they may be necessary for certain types of work.

To the surprise of absolutely nobody, I favor the reality-first approach as a developer-slash-systems-person. That might mean my goals are not as big, hairy, and audacious as they could be but… I can live with that :)

[ ... more ]

Oct 15, 2024