Alcides Fonseca

40.197958, -8.408312

Posts tagged as Software Engineering

Why TUIs are back

Terminal User Interfaces (TUIs) are making a comeback. DHH’s Omarchy is made of three types of user interfaces: TUIs, for immediate feedback and bonus geek points; webapps, because 37signals (his company) sells SaaS web applications; and the unavoidable GNOME-style native applications that really do not fit the style of the distro.

The same pattern occurred around 10 years ago in code editors. We went from the native editors of BBEdit, TextMate (also promoted by DHH), Notepad++ and Sublime Text to Electron-powered apps like Atom, VSCode and all its forks. The hardcore moved to vim or emacs, trading immediate feedback and higher usability for the steepest learning curve I’ve seen.

Windows

The lesson is clear: native applications are losing. Windows is the standing joke of GUI libraries: when one API fails to catch on, they invent another one, only for that one to drown in the sea of alternatives that already exist.

MFC (1992) wrapped Win32 in C++. If Win32 was inelegant, MFC was Win32 wearing a tuxedo made of other tuxedos. Then came OLE. COM. ActiveX. None of these were really GUI frameworks – they were component architectures – but they infected every corner of Windows development and introduced a level of cognitive complexity that makes Kierkegaard read like Hemingway.

Jeffrey Snover, in Microsoft hasn’t had a coherent GUI strategy since Petzold

Since then, Microsoft has gone through WinForms, WPF, Silverlight, WinUI and MAUI without success. Many enterprise and personal desktop applications still rely on Electron, and the last memory I have of coherent visual integration across the whole OS is Windows 98 or 2000.

It turns out that it’s a lot of work to recreate one’s OS and UI APIs every few years. Coupled with the intermittent attempts at sandboxing and deprecating “too powerful” functionality, the result is that each new layer has gaps, where you can’t do certain things which were possible in the previous framework.

— Domenic Denicola, in Windows Native App Development Is a Mess

Linux

The UI inconsistency in Linux was created by design. Different teams wanted different outcomes, and they had the freedom to pursue them. GTK and Qt became the two reigning frameworks. Both aimed to support cross-platform native development (Qt is the better known for it; once upon a time, I successfully compiled gedit on Windows, learning a lot about C compilation, makefiles and environment variables in the process), but they are only widely used in Linux land. Luckily, applications made with the different toolkits can look okay-ish next to each other, something the different frameworks on Windows fail to achieve. How many engineer-hours does it take to redo the Windows Control Panel?

Given the difficulty of testing the million different combinations of distros, desktop environments and hardware, most companies do not bother with a native Linux application: they either ship an Electron app (cementing the lock-in), or they let the open-source community solve it themselves (when they have open APIs).

macOS

Apple used to be a one-book religion. Apple’s Human Interface Guidelines used to be cited by every user interface course in the world. Xerox PARC and Apple were the two institutions that studied what it means to have a good human interface. Fast forward a few decades, and Apple is doing its best (or worst) to break all the guidelines and consistency it was known for.

Now, Apple has been ignoring Fitts’ law, making resizing windows near-impossible (even after trying to fix it) and adding icons to every single menu. macOS is no longer the safe haven where designers can work peacefully.

Electron

Everyone knows that the user experience of Electron apps sucks. The most popular complaint is memory consumption, which to be fair has been decreasing over the last decade, but my main complaint (as I usually drive a 64GB RAM MacBook Pro) is the lack of visual consistency and of keyboard-driven workflows. Looking at my dock, I have 8 native apps (TextMate and macOS system utilities) and 6 Electron apps (Slack, Discord, Mattermost, VSCode, Cursor, Plexamp). And that’s from someone who really wishes he could avoid having any Electron app at all.

Let us take the example of Cursor (the same would be true for VSCode). If you are in the agent panel, requesting your next feature, can you move to the agent list on the side panel with just the keyboard? Can you archive an agent? These are actions that should work the same across every macOS application, and even when there are shortcuts, they are not announced in the menus. Over the last decade, developers have been forgetting to add menu items for the actions available in their applications (mostly because the application is HTML within its sandbox). For the record, Slack does this better than the others, but it’s not perfect.

Restarting from scratch

Together with Dart, Google wanted to design a new operating system, without all the legacy of Android, for new devices. It wanted a fresh UI toolkit (Flutter), but Google gave up on the project before a real product was launched. It’s one of those situations where having a monopoly (or a large enough slice of the market) is required to succeed.

Meanwhile, Zed did the same thing in Rust: they designed their own cross-platform GPU-rendered UI library (GPUI). Despite the speed, it lacks integration with the host OS by itself, requiring the developers to add the right bindings. Personally, I would rather have a slow renderer that integrates with my OS than the extra speed.

TUIs

TUIs are fast, easy to automate (RIP Automator) and work reasonably well across operating systems. You can even run them remotely without any headache-inducing X forwarding. When the native UI toolkits fail, we go back to basics. Claude and Codex have been very successful on the command line: you focus on the interaction and forget about the operating system around you. You can even drive code and apps on cloud machines, or remote into your GPU-powered machine from your iPad. TUIs are filling the void left by Apple and Microsoft in the post-apocalyptic world where every application looks different. Which is fine if you are doing art (including computer games), but not if your goal is to get out of the way and let the user do their job.

What’s next

A checkbox is also part of an interface. You’re using it to interact with a system by inputting data. Interfaces are better the less thinking they require: whether the interface is a steering wheel or an online form, if you have to spend any amount of time figuring out how to use it, that’s bad. As you interact with many things, you want homogeneous interfaces that give you consistent experiences. If you learn that Command + C is the keyboard shortcut for copy, you want that to work everywhere. You don’t want to have to remember to use CTRL + Shift + C in certain circumstances or right-click → copy in others, that’d be annoying.

John Loeber in Bring Back Idiomatic Design

We need to go back to basics. Every developer should learn the theory of what makes a good user interface (software or not!), from Nielsen, Norman or Johnson, and stop treating UI design as a soft skill that does not matter in the software engineering curriculum. In any course, if the UI does not make sense, the project should fail. And in the HCI course, we should aim for perfect UIs. It takes work, but that work is mostly about understanding what we need. The programming is already being automated.

Operating system and toolkit authors should drive this investment. They should focus on making accessible toolkits that developers want to use, and lower the barrier to entry, making those platforms last as long as possible. I do not necessarily argue for cross-platform support, but having one such solution would help reduce the dependence on Electron and TUIs.

You probably don't need git worktrees

I have been trying to debug why Claude (and Cursor) often skips running git pre-commit hooks, despite them being required in the CLAUDE.md. I have to remind it in every prompt, and even then it fails to run them. The failures only show up in CI, immediately (without even running the test suite). This is also happening to the Dragonboat team, so I know it’s not a me-problem.

One of the things I changed during this debugging was to make each agent clone its own repo (our repos are quite small), and I have kept up this practice, as there is no real advantage for me in keeping a centralized local repo. This way, I am sure the agents do not touch a shared object database, potentially interfering with each other. I had previously run into issues where agents would reuse branches, mixing different PRs together.

But worktrees have some significant limitations. Not least of which is that their dependence on hardcoded fully-qualified paths written into configuration files not only makes worktrees non-portable, but also makes them DOA for any kind of containerized development environment. The worktree.useRelativePaths option helps here, but as of this writing it’s still relatively brand-new (Git v2.48 released Q1 2025), and not available in the version of git you probably have installed on your host machine or any of your container images. On top of that, VSCode’s support for relative worktrees with devcontainers is experimental, totally undocumented, and (as of this writing) unreliable at best.

— Avdi Grimm in You probably don’t need git worktrees

I wasn’t aware of all of those disadvantages; for me, it is simply so easy and fast to clone a fresh copy. And even if you want to work offline, you can clone from another local repo, like I did in 2007 when there was no Wi-Fi on campus.
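For small repos, the clone-per-agent setup is just a couple of git commands; here is a minimal sketch in Python (the directory layout, agent names and branch scheme are my own illustration, not a standard):

```python
# Sketch: give each agent a full clone instead of a worktree.
# Cloning from a local path is cheap (git hardlinks objects), and the
# agents never share an object database, so they cannot interfere
# with each other.
import subprocess

def clone_for_agent(central_repo: str, agent: str) -> str:
    """Clone `central_repo` into a private working directory for `agent`."""
    workdir = f"{agent}-workdir"
    subprocess.run(["git", "clone", central_repo, workdir], check=True)
    # One fresh branch per agent avoids mixing different PRs together.
    subprocess.run(
        ["git", "-C", workdir, "switch", "-c", f"{agent}/task"], check=True
    )
    return workdir
```

Because the clones are independent, deleting an agent’s directory cleans up everything, with no `git worktree prune` bookkeeping.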

Now if you have a multi-gigabyte repo, you might think twice about skipping worktrees. But then you have bigger problems to worry about.

The impact of LLMs on the Software Economy

There are a few people I recommend following to understand what is going on with software development:

  • Martin Fowler is probably my best reference in software engineering, from the low-level design-patterns point of view (where he made his fame) to the more process-oriented work and the recent impact of AI on development processes. Even academics would agree with me on this one.
  • James Governor (RedMonk) takes a more industry-wide, macro view of software development trends. They advise big companies on which route to take, and they estimate the money involved in these new trends. I would hire them if I were running a SaaS- or devtools-focused VC. I met James last year at LisbonAI, and there is a large contrast between his wacky style and the sobriety of his writing.
  • Simon Willison is the new name in AI development. I have followed Simon since before Django had a 1.0 release, and he has become the reference on the topic. He even has a paid newsletter with less content. Simon focuses more on the exploratory side and less on the engineering side, although he has started to document agent design patterns.

The fourth spot on my Mount Rushmore is up for debate. Tim Bray used to blog a lot about the organizational side of things, but has now retired from that. I used to love the writing of Steve Yegge, but I think he has drunk too much of his own Kool-Aid.

The new kid on the block for me is Mitchell Hashimoto. He has the technical background — Vagrant, Terraform and Ghostty — and he does not mind sharing his path.

Specifically, I think two of his recent blog posts are what every developer needs to read:

Historically, the effort required to understand a codebase, implement a change, and submit that change for review was high enough that it naturally filtered out many low quality contributions from unqualified people. For over 20 years of my life, this was enough for my projects as well as enough for most others.

Unfortunately, the landscape has changed particularly with the advent of AI tools that allow people to trivially create plausible-looking but extremely low-quality contributions with little to no true understanding. Contributors can no longer be trusted based on the minimal barrier to entry to simply submit a change.

Vouch, an open source tool to explicitly keep track of trust in open source repos.

Agents will more readily pick open and free software over closed and commercial. At the time of writing this article, this is an objective truth. Independent research labs running experiments on popular models have found repeatedly that under diverse circumstances, models pick open and free alternatives over commercial. So far.

The Building Block Economy, which gives you a new perspective on the impact of LLMs on the software development economy and tooling.

Z3 Python in the Browser in 10 minutes

Last night, while I was catching up on email, I wanted to make use of my time and our Claude subscription. I decided to scratch an old itch.

Our aeon programming language has Liquid Types (e.g., {x:int | x > 0}), and we rely on an SMT solver to type-check the implications that arise from subtyping (e.g., when passing something of type {x:int | x > 3} to a function that accepts {x:int | x > 0}, we need to verify that x > 3 -> x > 0 holds for all x).

But there was an issue: aeon is written in Python and relies on the z3 bindings, which contain C++ code. We can run Python code in the browser with Pyodide, but native libraries are not directly supported (at least not this one, which relies on multi-threading).

On the other hand, there is a z3 port to WebAssembly (by alectryon’s Clément Pit-Claudel, no less), but it follows the C API and has no browser Python bindings.

So, while I went through the ivory-tower-tall pile of emails, Claude reimplemented the z3 bindings in a different package, using the export-to-SMT-LIB feature already present in z3 and passing the result to the z3-wasm package.
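The pipeline is easy to picture: a subtyping obligation becomes an SMT-LIB script that any z3 build (native or wasm) can answer. A minimal sketch of that step, with a hypothetical helper of my own and the refinements already written in SMT-LIB syntax:

```python
# Sketch: encode the subtyping check {x:int | x > 3} <: {x:int | x > 0}
# as an SMT-LIB script. The subtype relation holds iff the implication
# is valid, i.e. its negation is unsatisfiable. The resulting text can
# be handed to native z3 or to a z3-wasm build in the browser.

def subtype_query(var: str, stronger: str, weaker: str) -> str:
    """Build an SMT-LIB script that answers `unsat` iff stronger => weaker."""
    return "\n".join([
        f"(declare-const {var} Int)",
        f"(assert (not (=> {stronger} {weaker})))",
        "(check-sat)",  # `unsat` here means the subtyping is valid
    ])

print(subtype_query("x", "(> x 3)", "(> x 0)"))
```

The same text format works on both sides of the native/wasm divide, which is exactly what makes this decoupling trick possible.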

I asked for examples and, after piping the errors I found in the browser back to Claude, I gave up on being a middleman and instructed Claude to use Rodney to interact with the browser directly (I was running this on a Linux server, not my local machine). It then went ahead and made the examples work on Chrome, in a nice demo page. Unfortunately, it did not work on Safari due to the lack of stack-switching support in WebAssembly, so I needed another prompt to fix that issue. It deployed to GitHub and Pipit automatically, with little effort.

Of course, you get what you pay for: I provide no assurance that there are no bugs. But it’s useful enough for me to prepare demos and materials for students that require no installation or compilation on their machines. That’s a win in my book.

And now you have support for z3 in Python within the browser, for your existing z3 Python projects, or just to play with z3, since the Python bindings are by far the easiest to play with.

Flickr REST API design

Half of my education in URLs as user interface came from Flickr in the late 2000s.

[…]

This was incredible and a breath of fresh air. No redundant www. in front or awkward .php at the end. No parameters with their unpleasant ?&= syntax. No % signs partying with hex codes. When you shared these URLs with others, you didn’t have to retouch or delete anything. When Chrome’s address bar started autocompleting them, you knew exactly where you were going.

[…]

It was a beautiful and predictable scheme. Once you knew how it worked, you could guess other URLs. If I were typing an email or authoring a blog post and I happened to have a link to your photo in Flickr, I could also easily include a link to your Flickr homepage just by editing the URL, without having to jump back to the browser to verify.

Marcin Wichary (via Michael Tsai)
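The “guessable” property Wichary describes is mechanical: every path segment is a meaningful prefix on its own. A tiny illustration (the URL below is made up in the Flickr style, not an actual Flickr route):

```python
# Sketch: in a hackable URL scheme, truncating a photo URL yields the
# owner's photostream; no round-trip to the browser needed to verify.
photo = "https://flickr.com/photos/alcides/43512"
photostream = photo.rsplit("/", 1)[0]
print(photostream)  # https://flickr.com/photos/alcides
```

That one-line edit is exactly what you could do while writing an email or a blog post.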

The advent of Single-Page Applications (through Angular and React) screwed over the beautiful URL design of the late 2000s, of which Flickr is one of the best examples.

Back then, APIs were designed for the public. But the facebookesque progressive siloing of the internet made big companies stop providing public documentation for most APIs, in order to control the clients (where the money is made).

If only we could solve the monetization of the internet…

Age-verification in Operating Systems and the Internet

The only way to prove that someone is old enough to use a site is to collect personal data about who they are. And the only way to prove that you checked is to keep the data indefinitely. Age-restriction laws push platforms toward intrusive verification systems that often directly conflict with modern data-privacy law.
This is the age-verification trap. Strong enforcement of age rules undermines data privacy.

Waydell D. Carvalho

To this point, here are some of the recent changes:

Here’s where each of the “All Operating Systems must do age verification” laws are as of today.

- Brazil (Law 15.211) : Signed into law. Requirements in effect on March 17th, 2026.

- California (AB-1043) : Signed into law. Requirements in effect on January 1st, 2027.

- Colorado (SB26-51) : Passed Senate on March 3, 2026.

- New York (S8102A) : In Senate Committee.

Note: As of today, March 4th, Operating Systems developers have only 13 days remaining before the Brazilian law takes effect.

Related: In order to “incentivize” age verification, The Federal Trade Commission (FTC) has announced that they will ignore COPPA violations for software performing age verification.

The Lunduke Journal

As I have written before, social networks and short videos are a matter of public health. However, I disagree that it is a matter of age: this affects adults as much as kids. But let us assume that the liberal side of me wants each and every one of us to make their own decision. Except minors, who depend on their guardians’ decisions (the American-centric way) or their government’s decisions (the European way).

The American approach is more suitable for a technological translation: devices for kids (think an iPhone 17e with underage mode enabled) require explicit permissions, either a priori or through an interactive prompt on their guardian’s device.

The European approach is much more difficult: you need to use your government-issued ID certificate to authenticate on the web, which leads to the end of the anonymous internet. I believe this change is coming, but I would like to preserve the 2000s internet as the governance model, especially given how our global village is navigating the geopolitical changes of the 21st century.

Regardless, internet websites need a way to ask for the age of their users, from either the user or the device. Users lie (everyone I know lied on website age checks, even after turning 18, out of habit), so device-based checking is being instituted. Apple designed an API in the 26+ versions of its OSes in response to the law changes mentioned above.

Mandating that operating systems even have accounts is insane. The Facebook dominance of the user-facing internet gives lawmakers the false impression that the whole internet follows the same siloed pattern. But the internet and operating systems are so much more than that: we should have the freedom to design operating systems (and internet protocols) however we like. If I want to design a user-less operating system, I should. Internet protocols should be designed by experts who understand how the internet works, with input from the social sciences to understand the impact.

Yes, we have a problem, but we are constraining the wrong things by law. It’s like passing a law requiring all knives to have a fingerprint lock so that only people over 16 can open them. Several international security and privacy researchers warn against these changes, for which there is no proof of any impact.

A simpler alternative would be for a non-profit or a government authority to create whitelists of websites suitable for different age ranges, and let parents configure those whitelists on their kids’ devices.

But I am sure I could embed an HTTP proxy under our university domain to route around any whitelist. The bottom line is that there is no technology that can replace good parenting.

A New Age Software Engineering Degree

What may happen is that software development involves less coding than it has in the past because of AI. At least coding by humans. So BLS is probably right about a decline in the need for computer programmers. At the same time, if software developers spend less time doing actual coding they may have more time for higher level (if that is the right term) thinking and involvement in design. Unless AI starts doing more of that. So maybe we will not need more of them. Or perhaps AI will make it possible for more people to be software developers who wouldn’t be that now. We’ll see I guess.

Computer Programming or Software Development by Alfred Thompson

Alfred analyses the difference between a programmer and a software developer. AI is replacing programmers (those who implement features identified by software developers), but not software engineers.

On the other hand, we might not be preparing our SE students for the next decade. We have good core CS and programming courses, but the advanced courses are not up to par with what the market needs. This aligns with the Barbell approach, which is the closest thing I have seen to a good path for SE education. We need good, pen-and-paper fundamental courses, and we need up-to-date advanced courses that make use of AI and whatever comes next.

The main problem is that technology is moving faster than Universities can adapt. Most professors are researchers in their own niche, and most are not doing Software Engineering, but they do teach it. We need more cutting-edge engineers to come back to universities to teach.

Here in Portugal, we have incentives not to hire professionals (I am fighting this locally, and got two real-world engineers to teach Functional Programming with me), and our degrees have to stay static for three to four years. This does not work in a day and age when the development process changes so frequently and professors are too busy to get any hands-on experience. I am also fighting that, but that’s for another post.

It's the end of anonymity in open-source as we know it.

There is no longer a curl bug-bounty program. It officially stops on January 31, 2026. […] Starting 2025, the confirmed-rate plummeted to below 5%. Not even one in twenty was real. The never-ending slop submissions take a serious mental toll to manage and sometimes also a long time to debunk. Time and energy that is completely wasted while also hampering our will to live.

The end of the curl bug bounty by Daniel Stenberg

Early last year, I argued that the internet needed to stop being anonymous so that we could live among LLM-generated content. The end of the curl bug-bounty program is another piece of evidence of what happens when we cannot tie submissions to real people, track their reputation, and eventually block them from trying a second or third time.

PGP was probably a solution ahead of its time. On the other hand, maybe we were lucky with what we achieved with anonymous developers working together on the internet.

TigerBeetle Code Style

  • Declare variables at the smallest possible scope, and minimize the number of variables in scope, to reduce the probability that variables are misused.
  • There’s a sharp discontinuity between a function fitting on a screen, and having to scroll to see how long it is. For this physical reason we enforce a hard limit of 70 lines per function. Art is born of constraints. There are many ways to cut a wall of code into chunks of 70 lines, but only a few splits will feel right.

TigerBeetle codestyle
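Hard limits like this are easy to enforce mechanically. A minimal sketch using Python’s ast module (TigerBeetle itself is written in Zig and uses its own tooling; only their 70-line number is reused here):

```python
# Sketch: flag every function longer than a hard line limit, in the
# spirit of TigerBeetle's 70-line rule. The limit and the API are
# illustrative, not TigerBeetle's actual tooling.
import ast

LIMIT = 70

def long_functions(source: str) -> list[tuple[str, int]]:
    """Return (name, length) for every function longer than LIMIT lines."""
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > LIMIT:
                offenders.append((node.name, length))
    return offenders
```

A check like this fits naturally in a pre-commit hook or CI step, where the rule becomes non-negotiable rather than a matter of review-time taste.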

I remember getting into Haskell back in 2010: “If your function has more than 4 lines, it is wrong.” For me, that is more meaningful than the 80-character limit. Soft-wrap exists to adapt lines of any length to your screen. Managing the complexity of functions and code blocks, however, is way more important in my book.

I know Haskell code ends up with larger functions (especially with monads), but keeping functions within 4 lines makes for an interesting trade-off between badly-named helper functions and the complexity of each function.

I know when to break this rule, as do most senior programmers. Junior programmers, however, lack the sensitivity to make such decisions. I love having a rule of thumb for newcomers who are not familiar with the ecosystem, or with programming in general.

By the way, the rest of the style guide is quite good! Except for the number-of-columns thing.

Foundations for hacking on OCaml

How do you acquire the fundamental computer skills to hack on a complex systems project like OCaml? What’s missing and how do you go about bridging the gap?

KC Sivaramakrishnan

KC gives several resources for students to get up to speed with contributing to OCaml.

One of the interesting resources is MIT’s The Missing Semester. This semester, I created our own version of this course, covering git, Docker, VMs, the terminal and bash, testing, static analysis, and LLMs for code.

While we cover how to do a Pull Request, I don’t believe students are ready to actually contribute. Reading large codebases is a skill that even our graduate MSc students lack. Courses are designed to be contained, with projects that need to be graded with little human effort, resulting in standard assignments for all students.

I would love to run something like the Fix a Real-World Bug course that Nuno Lopes runs. But the ability to review so many PRs is a bottleneck in a regular course.

Simon Willison reinvents TDD

As software engineers we don’t just crank out code—in fact these days you could argue that’s what the LLMs are for. We need to deliver code that works—and we need to include proof that it works as well. Not doing that directly shifts the burden of the actual work to whoever is expected to review our code.

Simon argues that engineers should provide evidence that things work when pushing PRs onto other projects. I recently had random students from other countries pushing PRs onto my repos, and I spent too much time reviewing them and making sure they worked. I 100% agree with Simon on this, but I feel the blog post is a bit pessimistic in suggesting that software engineers might become mere verifiers of correctness.

Don’t be tempted to skip the manual test because you think the automated test has you covered already! Almost every time I’ve done this myself I’ve quickly regretted it.

This is my experience for user-facing software. But these days, I spend little time writing user-facing code other than compiler flags.

Needy programs

Notifications are the ultimate example of neediness: a program, a mechanical, lifeless thing, an inanimate object, is bothering its master about something the master didn’t ask for. Hey, who is more important here, a human or a machine?

Nikita Prokopov

A funny piece by Niki, reporting how post-2010 software is needy, as shown by subscriptions, notifications, what’s-new panels and accounts. I wonder how much of this is due to Facebook-inspired, all-in, VC-backed software. You need to collect statistics and pay server costs, even if your app could work perfectly offline.

VSCode started as an interesting alternative to IDEs. Now I can no longer use it in my classroom: notifications, status bars, sidebars and Copilot all get in the way of showing (and navigating) code. I really want to go back to TextMate, but it lacks LSP support. Zed is the new kid on the block, but the collaborative aspect of it kind of ruins it for me. I want a native editor that I pay for once and that doesn’t distract me. If I want to use AI, I want a second editor for that (Cursor 2.0 is moving in that direction, but it’s still not there for me).

EU OpenSource Funding

The Portuguese government runs on top of open-source software. There are a bunch of Java web applications and jQuery-powered websites, the Citizen ID Card runs on open-source software, and our public data instance also runs on open source. Throughout many organizations, we use Linux web servers, Android devices and many other server-side tools. The government even has a GitHub account where it publishes some of its projects (and identifies open-source dependencies). There is a report listing some of our dependence on open-source software.

So, open-source software is part of the necessary public infrastructure, just like roads or sewage. Neither existed 200 years ago, but both are now expected by all citizens. Would you imagine a government that does not fund sewers? Well, ours does not fund open-source software.

But Germany does! Germany has invested more than 23 million euros in sixty projects. It’s time other countries followed its lead and created a more sustainable environment for open-source software.

Nextcloud (an alternative to Office 365 or Google Apps that you can install within your company) is asking the EU to create a Europe-wide Sovereign Tech Fund.

If we want commercial and data independence from the US and China, we need to invest more in local-first alternatives, and promote the open source alternatives from which European companies can create their own products.

Familiarity-Driven Design

Why do I run Prometheus on my own machines if I don’t recommend that you do so? I run it because I already know Prometheus (and Grafana)[…]
This has a flipside, where you use a tool because you know it even if there might be a significantly better option, one that would actually be easier overall even accounting for needing to learn the new option and build up the environment around it. What we could call “familiarity-driven design” is a thing, and it can even be a confining thing, one where you shape your problems to conform to the tools you already know.

Chris Siebenmann

I do exactly the same thing. There is so much technology in my life that I need to reduce the number of tools, languages and frameworks that I use.

Hidden interface controls are affecting usability

It’s the year 2070. You are a 20-year-old recruit about to travel back in time, 12 Monkeys-style, to try to save the world. You get to 2025, you find proof on an iPhone, and you need to take a screenshot and send it to a safe email address. Do you have a chance of discovering how to take a screenshot?

The other day I was locked out of my car. I had my keys, but the key fob button wouldn’t work and neither would the little button on the door handle that normally unlocks the car. At this point, every action I had to take in order to get into the car required knowledge of a hidden control. Why didn’t I just use my key to get in? First, you need to know there is a hidden key inside the fob. Second, because there doesn’t appear to be a keyhole on the car door, you also have to know that you need to disassemble a portion of the car door handle to expose the keyhole.

Philip Kortum has a nice article on how this quest towards “clean” interfaces actually hurts usability.

No AI in Servo

Contributions must not include content generated by large language models or other probabilistic tools, including but not limited to Copilot or ChatGPT. This policy covers code, documentation, pull requests, issues, comments, and any other contributions to the Servo project.

A web browser engine is built to run in hostile execution environments, so all code must take into account potential security issues. Contributors play a large role in considering these issues when creating contributions, something that we cannot trust an AI tool to do.

Contributing to Servo (via Simon Willison)

Critical projects should be more explicit about their policies. If I maintained a critical piece of software, I would make the same choice, for safety. The advantages of LLMs are not huge enough to be worth the risk.

Porting vs rewriting codebases from scratch

Languages that allow for a structurally similar codebase offer a significant boon for anyone making code changes because we can easily port changes between the two codebases. In contrast, languages that require fundamental rethinking of memory management, mutation, data structuring, polymorphism, laziness, etc., might be a better fit for a ground-up rewrite, but we’re undertaking this more as a port that maintains the existing behavior and critical optimizations we’ve built into the language. Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.

Ryan Cavanaugh, on why TypeScript chose to rewrite in Go, not Rust (via Simon Willison)

Do not take career advice from engineers with 5+ years of experience

Advice from people with long careers on what worked for them when they were getting started is unlikely to be advice that works today. The tech industry of 15 or 20 years ago was, again, dramatically different from tech today. I used to joke that if you knew which way was up on a keyboard, you could get a job in tech. That joke makes no sense today: breaking into the field is now very difficult, and getting harder every year.

Beware tech career advice from old heads, by Jacob Kaplan-Moss

The industry is undervaluing junior developers, by thinking LLMs can do their work. This is true at this instant, but junior developers have the potential to become senior developers.

I still remember years when my team did not have interns at Uber; and years when we did. During the time we did: energy levels were up, and excluding the intern I’d wager we actually did more. Or the same. But it was a lot more fun. All our interns later returned as fulltime devs. All of them are now sr or above engineers – at the same company still (staying longer than the average tenure)

Gergely Orosz

Whether LLMs can eventually be promoted to senior developers (or management) is a matter of faith. And if you believe they can, you may need to reconsider your own job.