Alcides Fonseca

40.197958, -8.408312

A rant about Eduroam

Eduroam is the federated university network system that allows me to authenticate at any participating university with the credentials provided by my current (or former) universities.

While this sounds like a good idea, I would rather have open wireless all around campus. Here are a few scenarios that would be improved:

  • Prospective students come to visit our campus. They do not have internet access.
  • There is an international conference with several non-European scholars visiting. They do not have internet access.
  • There is a local conference with industry members. They do not have internet access.

Just like public university libraries should be open to the general public, so should internet access (and the publication access it provides) be. Requiring authentication prevents many people from accessing useful resources. And having a guest username and password (the common solution for events) is troublesome and has caused me problems in the past.

Besides being able to block a specific user even when they randomize MAC addresses, what is the advantage of authentication on university networks?

Macbook Pro with TouchBar

Following up on a previous post, I have used three MacBooks over the last month. The screen of my 2011 MacBook Air died, and it was converted into my desktop at the new office (I am now with the University of Lisbon). Pretty much like Pedro Melo, I ordered a new MacBook Pro with Touchbar, but because I always max out the RAM (blame Docker and VMs), I had a three-week period with no laptop.

While I understand Apple’s desire to make 8GB the default amount of RAM due to energy consumption, it should not treat 16GB as a custom build-to-order option on its high-end 13’’ MacBook Pro. Luckily I unshelved my dear 2007 black MacBook to keep going to meetings and teaching classes in the meantime. I would not spend 1000+ euros on an 8GB machine in 2017, let alone the 2000 they ask. On a side note, if I weren’t expecting this new machine to last 5 years, I would have gone with the previous Retina MacBook, which was ~400 euros cheaper, with more or less the same specs. The main difference: USB-C.

When USB-A was introduced, it looked like a great solution, especially because it was the only solution at the time and all vendors adopted it pretty quickly. Then over time, USB turned out to be 16 different things, which disappointed many users who had invested in USB mini when USB micro became the default (and EU) standard for phone chargers. Then there were 30-pin and Lightning, Thunderbolt 1 and 2.

I’ve decided to invest in USB-C, but I know the risks:

  • It will take at least 5 years for old-style USB to disappear from external devices (think keyboards, mice, pen drives, external drives, and many other esoteric peripherals).
  • I will have to spend money on dongles, either buying duplicates or carrying them with me at all times.
  • Everything USB-C is more expensive, due to technology licensing.
  • Within 5 years, USB-Z will probably be out, obsoleting everything USB-C and making my choice useless (what I call technological serendipity).
  • USB-C is not a standard; it is a connector shared by several standards, such as power delivery, DisplayPort, Thunderbolt 3, PCIe, etc.

My main concern is the combination of the last two points. When someone buys a USB-C cable, they are making several decisions: does that cable carry power? Thunderbolt? DisplayPort? Which version of DisplayPort? This cable nightmare is already here, and I fell for it. I ordered a USB-C to Thunderbolt 2 adapter, hoping it would work with my mDP-HDMI, mDP-VGA and Thunderbolt 2-to-Ethernet adapters. But that adapter only works with Apple’s displays. The cable does not support Mini DisplayPort at all! I had to return it.

Overall, I ordered the following adapters with my macbook pro:

  • Apple USB-C to USB-C cable and Power adapter (came with the laptop).
  • Apple USB-C to VGA+USB-A+USB-C to take with me for classes and presentations.
  • Apple USB-C to HDMI+USB-A+USB-C for my home monitor, keyboard (which connects to my mouse), and power connector.
  • Apple USB-C to USB-A because I always like to have one official connector for devices that need their own port (External USB-3 drives).
  • Griffin BreakSafe – This replaces Apple’s cable for the power connection with MagSafe-like capabilities. There are other brands with the same magnetic cable, but most of them have really low build quality. Even Griffin’s has connection problems (on the other USB-C end, connecting to the charger, not on the magnetic connector). The end that connects to the MacBook is really wide, which only allows Apple adapters on the adjacent port (this would suck if I had the non-Touchbar version). I am not convinced by this cable, and I will only use it when I’m on the sofa or in bed. By the way, it does not work with the 15’’ MacBook Pro.
  • Dell WD1 dock with 2x USB2, 3x USB3, HDMI, mDP, VGA, Ethernet, Audio output and Audio in/output.

I considered ordering a second charger (I like to have one at home and another at the office, so I don’t have to carry it with me) and a USB-C to mDP adapter, but the whole dock came at the same price. I haven’t received it yet, so I cannot attest to its quality. I really prefer to use DisplayPort over HDMI. Using HDMI on my Dell U2414H shows little horizontal waves, while mDP works perfectly. It reminds me of the quality difference of DVI over VGA several years ago. Now I can finally have two 1080p external screens alongside my MacBook. Something I could previously only achieve with a USB 3-to-HDMI converter that my USB 2 MacBook Air used to drive, poorly, a 17’’ external screen.

And driving two screens was the reason I chose the Touchbar version. While two ports can be multiplied with daisy chaining and hubs, the available bandwidth stays the same. Just as several USB devices (because of power or bandwidth), like external drives and chargers, cannot share the same hub, I’m not betting that all my technology five years from now will connect through just two ports. It was an annoyance on the MacBook Air, and even that model had an extra Thunderbolt 2 port for the screen.

I understand the future is Bluetooth and wireless. That may now be true for keyboards and mice, but I do not buy it for external screens, GPUs (I intend to buy one as soon as I get funding) and external drives. I am more comfortable investing in four expansion ports. After all, that is the main problem with current Apple machines: lack of expandability.

What really annoys me is the Touchbar. As a general concept, it is stupid. I will have the Touchbar on my laptop, but not on my external keyboard or desktop machine. I pay for really expensive Apple keyboards just to have the same feel on all of my keyboards. But this is nuts! On a practical level, it is even worse. After two days, I had to configure the Touchbar to show the same buttons that exist on a regular keyboard. I really don’t look at the keyboard at all while I work, and the colored tabs in Safari were distracting. The worst part is the lack of physical feedback. I have large hands, and I’m used to having my fingers resting on keys, ready to press them when the time comes. On the Touchbar, if you have your finger on top of it, you are pressing the key. I am trying to adapt the way I use my computer, because I am always pressing either ESC or F1. And if you are working in Vim, or you have modal dialogs open, pressing ESC unintentionally is rather annoying.

Overall, I am really pleased with the machine, but the main differences from my 10-year-old black MacBook are its snappiness, the screen quality (which is not that important to me, because I’m always on external screens) and the stupid Touchbar. Maybe if Homebrew supported OS X 10.6, I would keep working on a 10-year-old machine. Processors have not evolved that much over the years; software has, and not in a good way.

Computing has not really evolved in the last 10 years

This year is the 10th anniversary of Take Off, a conference about innovation and entrepreneurship in tech. Back then, the startup movement was growing in Portugal and several small companies were being founded. Some of them are no longer active, but that’s entirely fine. Nowadays the startup community is well established and there is plenty of funding for testing your business idea.

On the computing side, we now have machine learning, deep learning, cloud computing and Raspberry Pis almost for free. This is a fantastic time to be a computer engineer. Yet, I am disappointed by what we haven’t accomplished in the last 10 years. My main concern is the way internet services have become closed silos.

Most people have data on major sites like Facebook, Google and others. Sure, Google allows users to export their data, but the main problem is that people have their services at these sites. If I want to contact my Facebook friends, I also have to be on Facebook. This service lock-in results in the following 2017 chat service problem:

Almost 10 years ago I was dabbling with XMPP, a protocol designed to support federated services. A Google user should be able to talk to a Facebook user, or to anyone running their own XMPP server, just like you are able to call numbers on other mobile networks. Google dropped XMPP support in Hangouts, replacing it with a binary alternative citing performance issues (despite binary encodings for XMPP existing). Facebook deprecated its internal XMPP API. Microsoft removed the interoperability with other chat services. Now I am stuck with Facebook Messenger and Skype on all of my machines, Slack on my work laptop, and WhatsApp, Telegram and a few others on my iPhone. I miss having everything in Adium.

The current solution for aggregating messenger services is Franz. While Adium was a native app, Franz is an Electron app: essentially a Chromium wrapper around the web version of each platform, nothing more than a chromeless browser with tabs. And this is representative of where desktop computing is heading: apps are written in web technologies for portability, and wrapped in Electron for the desktop.

Here is a list of services I use and desperately need a native version:

  • Slack
  • Visual Studio Code (I am still relying on good old Textmate, but miss plugins for recent technologies)
  • WhatsApp
  • Messenger
  • Spotify
  • Jira

Maybe I am really old-school in my preference for native applications, which feel responsive, integrate well with other apps (drag and drop is a pain) and boot in no time. Oh, and work offline, which is something most of them do not do.

I applaud Apple’s and Microsoft’s efforts towards continuity between mobile phones, tablets, PCs and TVs, but application makers are not following suit. I still receive plenty of duplicated notifications across my devices, and I cannot pick up on one computer what I was doing on the other. Which sucks for my current two-laptop setup. But I’ll leave that for a second post.

Phones, Emails, Domains and how to identify yourself online

People don’t own mobile phone numbers. They are rented from mobile operators. Yes, you may be able to move “your” number between a limited set of providers – but it ultimately doesn’t belong to you. An operator can unilaterally take your number away from you.

Your domain is only temporarily leased from your registrar. Perhaps you forget to renew your domain. Or renewal prices will jump and you can’t afford your “home” any more. Perhaps a global corporation insists that they alone have the right to use your name and take you to court.

– Can I own my identity on the internet?, by Terence Eden

Self-taught developers

Source for first website – table based layout, a lot of view source, a lot of Notepad, a lot of IE 6. Used to work mostly in HTML and CSS. With the help from books like “HTML for the World Wide Web – Visual Quickstart Guide”, learned a lot as a tinkerer.

Two years in: good with HTML (table layouts) and moderate CSS (fairly new), basic PHP, could use FTP and do basic web config. Could get a site up and running from scratch. This was enough to get my first developer job. This was without any computer science background.

Now: front end developer with 10 years experience, not an engineer, or a code ninja. I don’t know Angular, React, WebPack. I don’t even know JavaScript inside out. I am valuable to my team. Need more: empathy, honesty, being able to see stuff from a user’s perspective.

Self-taught developers today, via Tom Morris’ live blogging.

Back in my day, we learnt how things actually work. Nowadays, kids learn how to use high-level APIs without any idea of what happens underneath. They might learn Meteor, but have no idea about HTTP, sockets, or how HTTP sessions are implemented. Which is fine for developing tiny little apps, but they miss the “I understand all this sh*t” feeling.

Supposedly, high-level frameworks allow developers to write more complex programs in the same timeframe. However, I don’t believe this is true for small projects, because the setup time keeps growing. Let’s start a new single-page app: what do we need? Node, npm, webpack, Angular or React or whatever framework is trendy. Say what you will about PHP, but it was a single one-click WAMP install away from your fingertips.

If you were a 13 year old kid wanting to develop your own app, what would you use?

Miss Peregrine's Home for Peculiar Children

The movie was so-so, but the time travelling makes no sense at all. Which means the authors do not care about geeks at all, because we all know how much we love the internal logic of made-up scenarios.

Instead of listing how it does not make sense in this movie, I’ll just link to jws, who did the job for me.

But Eva Green is still hot as hell.

Devfest Coimbra 2016

Coimbra has been hosting several small but interesting events lately (#1, #2 and #3), and one of the organizations behind most of them is the Google Developer Group, of which I am a mostly inactive member.

Google Developer Groups (GDG) are community-run, independent groups that Google sponsors in different ways (mostly event support and some gadgets). Since Google has no client support or developer evangelists in most countries, it outsources that job to the community at a very, very cheap price. GDG organizers are people who would do something similar anyway, but use Google’s support to bring experts from other countries at a lower cost. More on this model later.

Google has a budget for certain events, and this late in the year it was time for Devfest. In Portugal, each of the three big cities hosted one. I attended the Devfest Coimbra, and I was surprised how well the team pulled this event together in so little time.

Having Google as a sponsor (along with many other local and international companies) made it possible to bring people from outside Coimbra, resulting in a speaker lineup with only one local speaker. This is uncommon here, as we usually have local people presenting on technical subjects, given our rich talent pool.

There was a main track with talks, and a secondary track with hands-on workshops, which allowed for a diversity that our typical events don’t provide. Almost half of the time I was in the third track: the hallway track.

Developer Stories – José Nunes talked about his two-man maker company that builds custom drones. Although there was no business-plan talk, he gave an overview of the current drone scene and the projects they are exploring.

Filipe Barroso gave the most confusing talk about git ever. He seemed to be targeting an audience that knew nothing about git, yet expected them to know how it works from a user’s point of view. He explained the blobs, commits and objects that git uses internally, and tried to explain the merging algorithm without going into details. He should have stuck to one level (beginner/intermediate/expert).

Progressive Web Apps with Polymer – This talk had a similar issue. The speaker expected people to already know Polymer, which was far from the truth. Thus, there was a big downtime installing Polymer (about 45 minutes). Speakers should have access to the profiles of registered attendees, in order to better prepare their talks/workshops. Additionally, pen drives with offline installers are a must for workshops.

Celso Martinho expanded his lessons-learned talk from Take Off with his new job at Bright Pixel.

Finally, Luís Silva gave a Ballmeresque motivational speech about having a wonderful job that allows us to put our art in the hands of everyone on the planet. He also talked about his company supporting code.org in two schools, something I’m fond of.

Overall it was a good event (free food!). I think better speakers could have been found for some topics, but I’m also to blame, because I was asked for suggestions. Additionally, the talk descriptions should have indicated beginner/intermediate levels.

Finally, while this was not a Google-tech-exclusive event, there is an incentive to organize events around their technology. GDG groups should not have Google in their name; in my opinion, they should organize any type of event, and Google would support the ones it has an interest in. Calling themselves a GDG associates them with Google, for good and for bad.

Awake

A wonderful ad showing how much we ignore our surroundings, even when we deeply believe we do not.

By the way, have you checked the SUR-FAKE project?

On Apple's innovation

Regarding the new MacBook Pro with TouchBar™, I suggest reading the review of the old Retina MacBook Pro as if it were about the new version. It is so believable that it scares me. But I have some points to add to all the reviews that have been published.

First, despite needing an upgrade from my 4GB-RAM 2011 MacBook Air, I’m not leaning towards the new Touchbar MBP. And I don’t know whether the fn-key version can have the same maxed-out specs. My issue is that this is the first implementation of the Touchbar. Within two revisions, the Touchbar will be larger and might wrap around the keyboard. Apple usually iterates on the technology of these thingies, as it did with the iPhone and the Apple Watch. This will result in a developer nightmare, just like when they introduced different iPhone sizes and resolutions. Secondly, this hardware will not be compatible with Linux, so you are effectively locking yourself into macOS, because no one in their right mind would use Linux without the function keys.

The other issue is the lack of a headphone port on the iPhone. I have the 2011 Mac and a 2014 iPhone 5, both of which have headphone jack issues. I connect headphones to my Mac almost daily, and two to four times a day I connect my car’s AUX cable to the iPhone. This has ruined the jack, and I need to twist the end of the cable to get stereo and the microphone. And not every angle gets me the microphone; it’s a gamble. So while I appreciate having the port for some unexpected use (such as real audio recording), I understand Apple’s forced recommendation to stop using cables, especially if they cannot guarantee they will work for 6 years (which is what I expect from them).

Finally, I am going to miss MagSafe. Together with the trackpad, it is one of the two best things to come out of Apple. And I understand they want to move to the standard solution, but is it really a standard?

The core issue with USB-C is confusion: Not every USB-C cable, port, device, and power supply will be compatible, and there are many different combinations to consider. The newest, most full-featured devices (such as Apple’s brand-new Touch Bar MacBook Pro) will support most of the different uses for the USB-C port, but typical older devices only support basic USB 3.0 speed and (if you’re lucky) Alternate Mode DisplayPort.

On the other guy’s degree

The list of government officials with fake degrees was recently extended. Rui Roque allegedly presented a document with his course grade average, but not necessarily proof that the degree had been completed. It is as ridiculous on his part as on the part of whoever verified the documents.

But from my point of view there is a third party that behaved badly in this story: the University of Coimbra. Quoting Observador:

Contacted by Observador, the University of Coimbra hid behind its regulations, explaining that the university “cannot provide information about its students and former students” and that “the only thing it can do is validate official documents, such as diplomas issued by the UC, and confirm whether or not they are authentic.”

That the UC hides behind its regulations is nothing new. It is standard practice in its relationship with students, even when the regulation makes no sense. And this seems to be such a case. The UC should publish a list of the diplomas awarded as soon as students finish their degrees. In public universities, this information should not be hidden. I believe the university’s faculty should take pride in making this information available, rather than feeling ashamed, as happens in cases where students finish a degree without deserving it, for various reasons. And then there would be no doubt about who holds a degree and who does not.

But my suggestion goes further: publish the list of students who complete each degree, with their final grade average and, American-style, their first employer as soon as possible.

On the students’ side, there would be an incentive to get the best possible average, so that employers (who would certainly look at that list) would consider them first, before others. On the university’s side, the information about first employers would let prospective applicants see the employability of each degree.

People talk about transparency in public administration, Open Data, and so on. But not even the universities, where the research is done, take that first step.

Pixels.camp voting

For those of you who don’t know, Pixels.camp is the largest geek event in Portugal, the follow up to Codebits. I love the event and I have given talks in previous editions.

The main component of the event is the 48h hacklathon. Participants team up to build any project they want, leading up to a 90-second pitch. During the pitches, participants in the audience vote on each project with a like or a dislike.

This is actually a hard task, as authentication is required (to prevent people who are not at the event from voting), and not all projects get the same number of votes.

So far the organization has used likes minus dislikes as the metric for ranking projects. Marco Amado proposed using the ratio (likes – dislikes) / (likes + dislikes). I do not believe this is fair, because a project with 5 likes and 1 dislike, a ratio of 0.(6), would rank higher than a project with 100 likes and 25 dislikes, a ratio of 0.6. I would consider the second project to be better, even if only because it generated more positive interest. Using just the difference is, in my opinion, more interesting.
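A quick sketch of the two metrics applied to the hypothetical vote counts above (the function names are mine, purely illustrative):

```python
def difference(likes, dislikes):
    """The metric used so far: raw difference."""
    return likes - dislikes

def ratio(likes, dislikes):
    """Marco Amado's proposal: normalized difference."""
    return (likes - dislikes) / (likes + dislikes)

a = (5, 1)     # small project: 5 likes, 1 dislike
b = (100, 25)  # popular project: 100 likes, 25 dislikes

print(difference(*a), difference(*b))  # 4 vs 75: b wins by difference
print(ratio(*a), ratio(*b))            # 0.666... vs 0.6: a wins by ratio
```

The two metrics disagree on exactly this pair of projects, which is the crux of the argument: the ratio ignores how many people cared enough to vote at all.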

However, I agree that it is a flawed system. I end up disliking every project that, in my opinion, shouldn’t win a prize. This decision is based on hunches, because voting happens in real time, and the first projects always end up with higher rankings because there is nothing to compare them against at that moment.

My suggestion is to change the ranking to use two different votes: like and love. Likes would work like Facebook likes: they would only serve to boost the project authors’ egos. Loves, unlike likes, would be used for the final ranking.

Each user would be given 100 votes, which would be divided equally across the projects they love. Then it is just a matter of doing the math and ranking the projects.
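A minimal sketch of how this love-vote ranking could be computed (the function name and ballot format are my own assumptions, not part of any real Pixels.camp system):

```python
from collections import defaultdict

def rank_projects(loves_by_user):
    """Each user has 100 votes, split equally among the projects they loved.

    loves_by_user maps a user id to the set of projects they marked as loved.
    Returns (project, score) pairs, highest total score first.
    """
    scores = defaultdict(float)
    for loved in loves_by_user.values():
        if not loved:
            continue  # users who loved nothing contribute nothing
        share = 100 / len(loved)
        for project in loved:
            scores[project] += share
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical ballots: alice loves two projects, bob loves only one.
ballots = {"alice": {"p1", "p2"}, "bob": {"p1"}}
print(rank_projects(ballots))  # [('p1', 150.0), ('p2', 50.0)]
```

Note how splitting a fixed budget makes a love from a picky voter worth more than one from a voter who loves everything, which is the point of the scheme.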

This approach keeps the simplicity of binary voting, takes into consideration how many votes there are, and has the advantage of not having dislikes, leading to a happier event!

I saw Christ twice, once in 3D and once in 4D.

I found out that Portugal already has two 4DX cinema rooms, one in Almada and one in Gaia. On the south bank of each of the two largest cities. Coincidence?

Yesterday I went to Almada to try that cinema and, thanks to destiny (nowadays it goes by the name of Google Maps), I ended up in the centre of Almada and took the opportunity to visit the only monument, besides the Fórum Almada, that I knew it had: the Cristo Rei. I am not religious, but off I went to try to peek under the skirts. Going up cost 5 euros, so I settled for the view from below, which is quite similar.

[Instagram photo: “Christ upskirt”, posted by Alcides Fonseca (@_alcides)]

Afterwards I went to spend money at the Almada Fórum until it was time for the movie. The film showing was Ben-Hur where, spoiler alert, the gentleman from the statue shows up again. 4DX means watching the film in 3D, on a medium-sized screen, in a room with few seats. One peculiar thing scared us right at the start: the room had almost no incline and, despite me being tall, there were heads blocking the bottom of the screen.

4DX adds seats that vibrate, rise, drop and tilt sideways (which should compensate for the room’s lack of incline), plus smells, fans, water sprays and soap bubbles. The soap bubbles did not show up in this film, but try Finding Dory and you might get lucky.

Seat vibration and movement is the most used element. I was afraid they would overuse this technology, but the production team used it well. In normal dialogue scenes, none of the techniques were used. But in scenes with horse riding or chariot races, the sensation was powerful. Better still, when someone peeked over a balcony or at a huge landscape, the gentle movement of the seats gave a slightly vertiginous effect that really fit. I won’t spoil the film, but the water sprays were surprising and landed exactly at the two moments where they made sense. It shows they don’t use things just because they have them at hand. But what really gave us chills was the air the seats blew at our feet (more so than the wall fans). During the races this added a sensation that was truly immersive.

Overall, I loved the use of the technology and strongly recommend it for action films (and perhaps horror, for the brave). The 12 euros were worth it, but it’s not for every film, and it comes with the bonus of no intermission! I was somewhat sorry not to have seen Star Trek in 4DX, but this one may have been even better.

As for the film itself, I really liked it and recommend it to everyone. It is not stupidly long, even without an intermission, and the plot is not predictable, especially the ending.

Spoiler Alert

I did not like the ending. I really hated it! He had no business approaching Messala while Messala had a sword in his hand. Either he turned his back and walked away, or Messala dropped the sword first. Or, my favourite ending, Messala would use the sword to kill himself rather than be the Shame of Rome. But well, I suppose this version couldn’t exactly change the story…

Coimbra breathes geekness #3

Geek Stuff

  • Aug 21 – GDG Gather Hour II @ Fangas – Meet & drinks by the Google Developer Group. Meet Carlos and the guys and talk about Android and other Google tech.
  • Oct 1 – The Product Garage Party @ TBD – I wasn’t given permission to share what the event is about, but I’ll spoil this much: it’s the kind of party early Steve Jobs would attend.

Training

  • Nov 5-6 – Intelligent Design @ EDIT Lisboa – Artificial Intelligence and the Internet of Things in Product Design, presented by yours truly. Details soon to come.

Elsewhere

Despite all these events, there are two national events overshadowing the community activity in Coimbra:

  • Oct 6-8 PixelsCamp @ LXFactory, Lisboa – Codebits is back under the Sonae brand. Same people, same spirit.
  • Nov 7-10 WebSummit @ Lisboa – I haven’t decided if I should go or not (I already got a ticket) because I don’t know if it is going to be 100% commercial, or if some people who actually work are also going to be there. We’ll see.

Suicide Squad

TL;DR: Yet another Marvel fanboy bashing Suicide Squad.

Spoiler free

Suicide Squad is a lightweight superhero movie with an ensemble cast. I enjoyed it, but it’s far from a good movie. The plot is very straightforward, you can predict everything up to the end, and there is almost no character development. Character-wise, the movie focuses on Will Smith’s character (probably the one who got paid the most). The adaptations from the comics were pretty much okay, except for…

... the Joker. I really hated this guy. Ben Affleck was a just-OK Batman, but he did not suck as much as the Joker. First, they should be around the same age, with the Joker maybe a bit older. But this Joker looks 30 at most, and Mr. Wayne is in his early 40s. That difference makes no sense to me. Secondly, his face tattoos and metallic teeth make him look more like a rapper who sang with Slipknot than a fearsome crime lord/villain. He doesn’t strike me as an insane mastermind. Just some rich kid who did too many drugs and has enough money to hire thugs to do the work for him. Mark Hamill’s animated-series Joker is still the best for me.

With light spoilers

The movie is filled with unnecessary characters (Boomerang and Katana, for starters) and the plot was built around the action scenes without any justification: 1. The witch could have overpowered everyone in a matter of seconds at any time. 2. The USA could have nuked the place, instead of requiring a small explosive. 3. Deadshot could have destroyed both smartphones in a glimpse.

The amount of jokes is okay, but it is nothing like Deadpool or the first Avengers. I understand DC is supposed to be darker, but this movie is clearly designed to be funny.

Coimbra breathes geekness #2

Geek Stuff

  • Feb 19-21 – Shift Appens @ Pavilhão de Portugal – Hacklathon!
  • Mar 4 – Coimbra Codes #3 @ Nest – Monthly informal meeting where the Coimbra entrepreneur community meets.

Coimbra breathes geekness #1

Geek Stuff

October 30 – Coimbra Scene Meetup @ Nest – Monthly informal meeting where the Coimbra entrepreneur community meets. No talks!

October 31 – Robotics Introduction @ Casa das Artes – The wonderful Artica will give an introduction to robotics using their own robots. (45 euros)

November 6 – UnixDEI @ DEI – A Saturday of talks for the unix geeks all around.

November 21-22 – Portuguese Hacklabs Meeting @ Casa das Artes – Audiência Zero gathers its Coimbra, Lisbon and Porto labs in an open weekend to present projects, share knowledge and develop new ideas.

December 4 – Coimbra Java User Group Meeting @ DEI – Introduction to Maven (by Sérgio Ferreira). It’s a must for those of you who want high-paying corporate jobs.

December 4 – Coimbra Codes #2 @ IPN – Three 20-minute talks: Graph Databases (by my former student Pedro Paredes), Ansible and AWS deploys (by Vasco Pinho) and Property-Based Testing in Scala (by my former advisor Paulo Marques). I am also an advocate of property-based testing, although I believe Haskell works best for this purpose.

December 11-13 – Ludum Dare @ Casa das Artes – 72h gamedev marathon competition.

December 12 – Portugal Google Developer Groups Summit @ DEI – Developers of Google-based services from Portugal, and maybe some from Spain will meet in Coimbra, with some Google developers coming as guests.

December 16 – Android Talks #2 @ Nest – Three 20-minute talks (I see a pattern here) about Android development. If you are a hardcore developer, you’ll feel at home. If you are starting out, you will get the idea anyway.

January 29-30 – Global Game Jam @ Casa das Artes – Another 72h gamedev marathon competition.

In the last month or so, we’ve had: Coimbra Codes #1, Android Talks #1, Portugal Virtual Reality Meetup and others…

You may have noticed that we are going to have two gamedev marathons very close, which may mean that we will not have our local version dubbed The Game of Games with local prizes.

Fun Stuff

Every Thursday – Boardgames @ Casa das Artes – Carcassonne, Catan and Ticket to Ride are only the first games that will get you into economics, bluffing, pushing your luck and global world domination.

November 1 (First Sunday of each month) – RPG meeting @ Casa das Artes – Let's play short RPG adventures on a rainy Sunday afternoon.

October 31 to November 29 – Ano Zero – A whole lot of art exhibitions and performances.

December 4-6 – Comic-Con @ Porto – Not in Coimbra, but big and close enough to be mentioned here.

December 17 – Star Wars Episode VII – The Force Awakens

I intend to make this a periodic thing. If something is missing, please let me know.

On the FCT Evaluation

The results of the first FCT evaluation by an external panel since 2005 have been released.

It is worth noting that the panel's leader was a member of the ESF governing board until 2013. ESF is the same organization FCT commissioned to cut 50% in the evaluation of Portuguese research institutions, a fact they hid from the public. And when they report that this same person claims "The decision to hire ESF was a very good one", there is nothing short of an element of the ridiculous.

As for the results, they argue that FCT should gradually phase out individual grants in favour of sponsored doctoral programmes and of grants budgeted within research projects. Regarding the first option, it happens that several fields would be left without funding because they are not national strategic vectors, regardless of the student being the best in the field with a project that makes perfect sense. We will end up with a huge wave of bioscientists while other fields fall behind.

As for grants paid by research projects, there is the obstacle that the research project may not fully overlap with the period in which the student is doing the PhD, or may simply be short. Could there at least be the possibility of the PhD continuing to be paid, even after the project's end date? This model already existed before, and individual grants were still the main choice, not least because those paid the tuition fees of around 3000€ per year, while project grants did not.

But the reason they want to end individual grants is that FCT does not have the resources to process so many individual grant applications. Given that grant holders have been complaining about these delays for years, it is ridiculous that FCT needed an external committee to tell them this. It was bad (terrible, actually) planning on FCT's part. And considering there is only one call per year, it concentrates effort that should be better distributed into a period when, for weeks, FCT does not answer the phone (I tried, believe me!). And the ones who suffered were the students, who often had to wait up to 9 months after finishing their master's before getting a PhD grant.

Another point they defend is that "post-doctoral grants should not exceed 3/4 years". In my opinion, post-doctoral grants should not even exist. Projects and institutions should have the budget to hire researchers, with one of those documents that establishes a legal employment relationship, of the kind FCT fears and fights to keep grants from ever having, so that they can pay taxes like everyone else and receive all the corresponding benefits. And since they do not even consider PhD holders to be workers (although the evaluation panel does, when it writes "... the salaries of PhD holders...", when they actually receive grants, not salaries), much less do they consider PhD students, who work full-time after the first year but, once again, have no job. They go on until age 35 on grants, with voluntary social insurance and without paying taxes. This is how we treat the people who finish their degrees with the best marks. And even that, I think, is coming to an end. From the reality I have access to, and which is in demand, the best (and most competent) students are leaving for industry, leaving the University with the more average students. Which was exactly what we needed. Statistics like those I did not see in the report.

In conclusion, the evaluation focused on procedures rather than on achieving results. Nothing was said about ties to industry, namely about fostering partnerships that fund projects. Nothing was said about attracting talent, national or international. Nothing was said about the legal status of grant holders, nor about the grant amounts. It pointed out a mistake that has existed at FCT for more than 10 years, and which the Associação de Bolseiros does nothing but talk about. And the only reason to support correcting that measure is the lack of human resources to run an annual evaluation of something that should not even be annual.

Farewell passenger!

Six years and two VPSes later, I decided it was time to leave Phusion Passenger.

I was experiencing several downtimes due to database (both mariadb and mysql) crashes. It turned out the database could not recover because ruby was occupying the whole memory (512MB on my DigitalOcean droplet). The solution was to shut down apache, then start the database, and finally start apache again. But this would only work for a couple of days.

I decided to move all my django sites to WSGI (yes, I was deploying django apps using passenger) and ported a sinatra app to django (thanks to legacy database support) just so I could get rid of ruby.

It all works well now. But in the meantime, I went to read Passenger's source code and I am currently considering forking it and removing everything ruby from it: just an apache2 module that would automatically handle wsgi, avoiding 4 lines of apache config per project. It can be done; I am just not sure it is worth the time. Maybe it would help other people with plain-simple deploying on Apache/NGINX?
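For reference, this is roughly the per-project boilerplate a mod_wsgi deploy needs today; the site name, virtualenv and paths below are placeholders, not my actual setup:

```apache
# Hypothetical per-project mod_wsgi stanza (names and paths are examples)
WSGIDaemonProcess mysite python-home=/srv/mysite/venv
WSGIProcessGroup mysite
WSGIScriptAlias / /srv/mysite/mysite/wsgi.py
<Directory /srv/mysite/mysite>
    Require all granted
</Directory>
```

A module that derived all of this from a convention (say, a wsgi.py in the vhost's document root) would reduce it to nothing, which is the appeal of the fork.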

BQ Aquaris 4.5 and adb

If you connect a bq Aquaris 4.5 (or any other model from bq) to your windows/mac/linux machine, it won't show up in adb, Android Studio or Xamarin.

The solution is to add the device's VendorID to a certain file. My VendorID is 0x2a47, and on a mac you can get yours from Apple Menu > About This Mac > System Report > USB > Bq Aquaris 4.5 > Vendor ID.

Then append that VendorID to the following file: ~/.android/adb_usb.ini. Restart your adb server and it should work:

sudo adb kill-server; sudo adb start-server
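The whole fix can be scripted. A minimal sketch, assuming the VendorID is 0x2a47 (substitute your own), which only appends the ID if it is not already listed:

```shell
# Append a VendorID to adb's USB vendor list, idempotently.
# ADB_INI and VENDOR_ID are example values; substitute your own.
ADB_INI="${ADB_INI:-$HOME/.android/adb_usb.ini}"
VENDOR_ID="0x2a47"

# Make sure the directory and file exist before appending
mkdir -p "$(dirname "$ADB_INI")"
touch "$ADB_INI"

# Add the ID only if no line matches it exactly
grep -qx "$VENDOR_ID" "$ADB_INI" || echo "$VENDOR_ID" >> "$ADB_INI"
```

After running it, restart the adb server as above and the phone should appear in `adb devices`.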