Fantastic piece by Mark Nottingham on the future and openness of the Internet!
New applications and networks appear daily, without administrative hoops; often, this is referred to as “permissionless innovation”, which allowed things like the Web and real-time video to be built on top of the network without asking telecom operators for approval.
Yes, the internet is a huge but unlikely success that (I believe) was only possible because it moved faster than regulatory and legislative bodies could understand it.
On the other hand, the Australian eSafety Regulator’s effort to improve online safety – itself a goal not at odds with Internet openness – falls on its face by applying its regulatory mechanisms to all actors on the Internet, not just a targeted few. This is an extension of the “Facebook is the Internet” mindset – acting as if the entire Internet is defined by a handful of big tech companies. Not only does that create significant injustice and extensive collateral damage, it also creates the conditions for making that outcome more likely (surely a competition concern). While these closed systems might be the most legible part of the Internet to regulators, they shouldn’t be mistaken for the Internet itself.
Yes, countries are regulating something that they do not own (the internet), without considering that (critical, public and international) infrastructure’s wellbeing. There are no border controls on the internet, and while I agree there should be regulation and laws on what you can do with the internet, the internet itself (the infrastructure) should not be regulated.
Likewise, the many harms associated with the Internet need both technical and regulatory solutions; botnets, DDoS, online abuse, “cybercrime” and much more can’t be ignored. However, solutions to these issues must respect the open nature of the Internet; even though their impact on society is heavy, the collective benefits of openness – both social and economic – still outweigh them; low barriers to entry ensure global market access, drive innovation, and prevent infrastructure monopolies from stifling competition.
This is where I think Mark is wrong. The unlikely success of the internet is coming to an end, due to the economics of LLM-generated content. If we want the internet to remain open, it should remain open to humans and agents alike. If everyone has an OpenClaw agent running around, they multiply their internet footprint by 1000x or more. ISPs will notice, and change the pricing and economics of the internet. As I warned before, the signal-to-noise ratio will decrease substantially and something alternative will arise from the Internet’s ashes.
Like any unix aficionado, I have my own stash of custom commands. At some point, I copied Pedro Melo’s idea of starting all of them with an underscore, to get more precise auto-completion.
Fast-forward some years, and in my macOS and ubuntu shells I have plenty of system scripts that start with one or two underscores, undermining the autocomplete advantage.
Brandon Rhodes proposes using the comma instead. Personally, I am not a fan of taking a punctuation symbol that carries other meanings in programming languages and using it inside a shell that parses input as a programming language (zsh, these days). But this is the type of yak shaving that automates your workflows.
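For illustration, here is a minimal sketch of the convention (the directory and script names are hypothetical): keep personal scripts in one directory on `$PATH`, all sharing a prefix, so that typing the prefix plus Tab completes only your own commands.

```shell
# Hypothetical setup: personal scripts in ~/bin, each with a common prefix.
mkdir -p "$HOME/bin"

# An underscore-prefixed script (Pedro Melo's convention)...
printf '#!/bin/sh\necho "backup done"\n' > "$HOME/bin/_backup"

# ...and a comma-prefixed one (Brandon Rhodes' proposal); the comma is a
# perfectly legal character in a command name.
printf '#!/bin/sh\necho "notes opened"\n' > "$HOME/bin/,notes"

chmod +x "$HOME/bin/_backup" "$HOME/bin/,notes"
PATH="$HOME/bin:$PATH"

# Typing `,` + Tab would now complete only your own commands; running them
# works like any other command on $PATH.
_backup
,notes
```

The comma’s practical advantage is exactly the one the underscore lost: no system script starts with a comma, so the completion list stays yours alone.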
It’s impossible to keep up with all the new developments in the LLM era. However, one thing has held true: they never stopped improving.
Malte Skarupke explains How LLMs Keep on Getting Better, covering a few of the different visible and invisible aspects of LLMs that have been worked on over the past couple of years. It’s a really good overview for those who are not into the weeds of it.
(via Antónia)
An excellent layman’s recap of Europe’s dependency (in terms of defense, but also the economy) on US tech. What happens if we cannot have US-owned operating systems on our mobile phones? Or cannot buy American brands for our hospital computers and servers? Will you still receive emails or direct messages?
I will continue my quest to move out of gmail to something European. Unfortunately, Portuguese SAPO is no longer an alternative, so I will have to go for something German, Dutch or Swiss.

via Aws Albarghouthi
So Feynman did the 15 competing standards joke before xkcd. And apparently, he did some occasional industrial consulting.
Although the Municipal Police insists that bicycle removals target abandoned vehicles, Paula, the resident who reported the situation, assures LPP that the bicycles removed on Rua Lopes looked new or in good condition, a claim her photographs confirm.

MUBi also notes that the bicycles were “removed without any record or notice at the site, denying the right to contest the removal and leaving their rightful owners convinced they had been victims of theft”, and expresses “its deep concern and repudiation” of the action carried out by the municipal police authority.
Bicycles parked on sidewalks bother me too. But the solution is not to steal them from their owners; it is to provide the kind of bicycle lockers that have been so successful in London.
There is no longer a curl bug-bounty program. It officially stops on January 31, 2026. […] Starting 2025, the confirmed-rate plummeted to below 5%. Not even one in twenty was real. The never-ending slop submissions take a serious mental toll to manage and sometimes also a long time to debunk. Time and energy that is completely wasted while also hampering our will to live.
— The end of the curl bug bounty by Daniel Stenberg
Early last year I argued that the internet needed to stop being anonymous, so that we can live among LLM-generated content. The end of the curl bug bounty program is another piece of evidence: we cannot tie submissions to real people, track their reputation, and eventually block them from trying a second or third time.
PGP was probably a solution ahead of its time. On the other hand, maybe we were lucky with what we achieved with anonymous developers working together on the internet.
Every time I play Uno, I have to ask the owner of the game (or the group) exactly which rules we are playing with. There are multiple variations of the rules, and yet I have never played with the correct ones, explained in the video below.
However, I still like to play with my own set of rules, which gathers several variations that I’ve experienced and found to be fun. In particular:
Yes, these rules might sound like they make the game longer, but on average they make it much shorter, and they will give you plenty of laughs. Do try them and let me know whether they worked for you.
- Declare variables at the smallest possible scope, and minimize the number of variables in scope, to reduce the probability that variables are misused.
- There’s a sharp discontinuity between a function fitting on a screen, and having to scroll to see how long it is. For this physical reason we enforce a hard limit of 70 lines per function. Art is born of constraints. There are many ways to cut a wall of code into chunks of 70 lines, but only a few splits will feel right.
I remember getting into Haskell, back in 2010: “If your function has more than 4 lines, it is wrong”. For me, that rule is more meaningful than the 80-character limit. Soft-wrap exists to adapt lines of any length to your own screen; managing the complexity of functions and code blocks matters far more in my book.
I know you end up with larger functions in Haskell (especially with monads), but keeping functions within 4 lines creates an interesting trade-off between badly-named helper functions and the complexity of each function.
I know when to break this rule, as do most senior programmers. Junior programmers, however, lack the sensitivity to make that decision. I love having a rule of thumb for newcomers who are not familiar with the ecosystem, or with programming in general.
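The rule travels beyond Haskell. As a hypothetical sketch (the functions and data here are invented for illustration), this is what the same discipline looks like in Python: each step gets a small, nameable function, and the complexity lives in the composition rather than in any single body.

```python
# Hypothetical example of the "keep functions tiny" discipline:
# each function does one nameable thing in a few lines.

def top_three(scores):
    """Highest three (name, points) pairs."""
    return sorted(scores, key=lambda s: s[1], reverse=True)[:3]

def render(score):
    """One line of the report."""
    name, pts = score
    return f"{name}: {pts}"

def report(scores):
    """Compose the pieces; the logic is all in the composition."""
    return "\n".join(render(s) for s in top_three(scores))

print(report([("ana", 7), ("rui", 12), ("eva", 9)]))
```

The trade-off the rule forces is visible even here: every helper needs a good name, and a badly chosen name hides more than a long function ever would.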
Btw, the rest of the style guide is quite good! Except for the number of columns thing.
How do you acquire the fundamental computer skills to hack on a complex systems project like OCaml? What’s missing and how do you go about bridging the gap?
KC gives several resources for students to get up to speed with contributing to OCaml.
One of the interesting resources is MIT’s Missing Semester. This semester, I created our own version of the course, covering git, docker, VMs, terminal/bash, testing, static analysis, and LLMs for code.
While we cover how to make a Pull Request, I don’t believe students are ready to actually contribute. Reading large codebases is a skill that even our graduate MSc students lack. Courses are designed to be self-contained, with projects that can be graded with little human effort, resulting in standard assignments for all students.
I would love to run something like the Fix a real-world bug course Nuno Lopes runs. But having to review that many PRs is a bottleneck in a regular course.
To really understand a concept, you have to “invent” it yourself in some capacity. Understanding doesn’t come from passive content consumption. It is always self-built. It is an active, high-agency, self-directed process of creating and debugging your own mental models.
— François Chollet (via Simon Willison)
It’s a rephrasing of the old “the best way to understand something is to teach it to someone else”. And that’s why I still love my job.
I use Hacker News as a tech and economy news feed, but I don’t necessarily comment or upvote a lot.

Just like YouTube’s or Spotify’s Wrapped (I personally use Last.FM), there is this fun HN Wrapped, done in a tongue-in-cheek style that I particularly love.
Like last year, my 2025 music trends are quite steady:

My Last.FM shows me that I did not listen to a lot of new music. Here are the 2025 albums I have added to my library:
While not A-side material, I have also enjoyed the two alternative releases by Linkin Park (A Cappella: https://consequence.net/2025/01/linkin-park-a-cappella-album-from-zero/) and The Mars Volta (Lucro Sucio; Los Ojos Del Vacio), but the original material still plays more on my speakers.
In hindsight, I have spent very little time searching for new bands, other than Majestica. One interesting Top 40 source of recommendations is the Hitster board game, where you try to build a timeline of popular songs from Spotify’s 30-second previews alone.
As software engineers we don’t just crank out code—in fact these days you could argue that’s what the LLMs are for. We need to deliver code that works—and we need to include proof that it works as well. Not doing that directly shifts the burden of the actual work to whoever is expected to review our code.
Simon argues that engineers should provide evidence that things work when pushing PRs to other projects. I recently had random students from other countries pushing PRs to my repos, and I spent too much time reviewing them and making sure they worked. I 100% agree with Simon on this, but I find the blog post a bit pessimistic in suggesting that software engineers might be reduced to verifiers of correctness.
Don’t be tempted to skip the manual test because you think the automated test has you covered already! Almost every time I’ve done this myself I’ve quickly regretted it.
This is my experience for user-facing software. But these days, I spend little time writing user-facing code other than compiler flags.
Notifications are the ultimate example of neediness: a program, a mechanical, lifeless thing, an inanimate object, is bothering its master about something the master didn’t ask for. Hey, who is more important here, a human or a machine?
Funny piece by Niki, reporting on how post-2010 software is needy: subscriptions, notifications, what’s-new panels, accounts. I wonder how much of this is due to Facebook-inspired, all-in, VC-backed software. You need to collect statistics and pay server costs, even if your app could work perfectly offline.
VSCode started as an interesting alternative to IDEs. Now I can no longer use it in my classroom: notifications, status bars, sidebars, and Copilot all get in the way of showing (and navigating) code. I really want to go back to TextMate, but it lacks LSP support. Zed is the new kid on the block, but its collaborative aspect kind of ruins it for me. I want a native editor that you pay for once and that doesn’t distract you. If I want to use AI, I want a second editor for that (Cursor 2.0 is moving in that direction, but it is still not there for me).

Anthropic released a paper showing that different models activate the same features. This is the kind of fundamental work on neural networks I support. It might lead to increased trust in the use of LLMs and other foundation models.
The Portuguese government runs on top of open-source software. There are a bunch of Java web applications and jQuery-powered websites; the Citizen ID Card runs on open-source software, and our public data instance also runs on open source. Across many organizations, we use Linux web servers, Android devices, and many other server-side tools. The government even has a GitHub account where it publishes some of its projects (and identifies open-source dependencies). There is a report listing some of our dependence on open-source software.
So, open-source software is part of the necessary public infrastructure, just like roads or sewage systems. Neither existed 200 years ago, but both are now expected by all citizens. Would you imagine a government that does not fund sewers? Well, ours does not fund open-source software.
But Germany does! Germany has invested more than 23 million euros in sixty projects. It’s time other countries followed its lead and created a more sustainable environment for open-source software.
Nextcloud (an alternative to Office 365 or Google Apps that you can install within your company) is asking the EU to create a Europe-wide Sovereign Tech Fund.

If we want commercial and data independence from the US and China, we need to invest more in local-first alternatives, and promote the open source alternatives from which European companies can create their own products.

If ChatGPT can produce research papers that are indistinguishable from what most scientists can write, then maybe scientists can focus on actually advancing science—something that ChatGPT has thus far proven unable to do.
— Beyond papers: rethinking science in the era of artificial intelligence by Daniel Lemire
Looking at the proceedings of our conferences over the past few years, I find that most of the papers are simply uninteresting. Moreover, it seems that every first-year PhD student is now required to write a systematic review on their topic — supposedly to learn about the field while producing a publication.
Let me be blunt: every systematic review I’ve read has felt like a waste of time. I want to read opinionated reviews written by experts — people who have seen enough to have perspective — not by PhD students who have just skimmed the past decade of papers on Google Scholar.
We need far fewer papers (I’m doing my best to contribute to that cause), and the ones we do publish should be bold, revolutionary, and even a little irreverent. We need innovation and the courage to break expectations. Incremental research has its place, but that doesn’t mean it always needs to be published.
To make this possible, evaluation committees — both nationally and within universities — must rethink their processes to move away from bean-counting metrics. Our current incentive system discourages genuine peer review, and even when proper reviews happen, they often waste effort on work that adds little value.
Otherwise, yes — the bean-counting-reinforcement-learning AIs will take our jobs.

Source: Caught Stealing (via the always alert jwz)
It’s also funny that it claims to be protected by US jurisdiction, when it should actually be protected by the laws of the market where it was bought or rented (or accessed?). I also wonder whether the next US president will defend the US economy or protect their citizens.