
The Future of Elega Corporation




For those who have been following since the company's beginning back in 2017 (thank you to those of you who have!), you probably know that I have long been talking about researching some fundamentals of computing technology. In past months, the discussions have centered on a question: what sort of things can be done with old, slow computing technology, instead of chasing the cutting-edge trends playing out on graphics processing units (GPUs)? In previous blog posts, I have discussed things like the PICO-8 fantasy console, the real numbers behind even the very first personal computers, and some of the historical trends around microprocessors.

Indeed, most companies do not examine this area very often. One company famous for doing so is Nintendo, which has repeatedly found new ways to use older technologies. It has not always worked out, as in the case of the Virtual Boy console, but surprisingly often it does work out in remarkable ways. There is nothing particularly impressive about the specifications of the Game Boy, for example; even at its release, the little handheld was not using what would have been considered modern hardware. Nintendo repeated this with the Nintendo Switch, albeit to a lesser extent: the Switch uses what was, at the time of its release, an already-dated system-on-a-chip (SoC) from NVIDIA. The secret sauce of the Switch, so to speak, is that it finds a different way for people to interact with the content being powered by that SoC. To be fair, Nintendo is an entertainment company that, throughout its history, has made a lot of its money on what are essentially toy products: even back in the late 1800s, when it was mostly making playing cards.

The truth is: there are a lot of ways in which non-entertainment-related products and services could benefit from finding new ways to apply old ideas.

Jonathan Blow, the indie game developer behind games like The Witness and Braid, pointed out in a recent talk that humanity has a history of doing remarkably impressive things with technology and then periodically stepping backward, because it fails to carry knowledge across generations. Especially when knowledge involves high complexity, elaborate science, or complicated engineering, it is easy for its implementation at a high level of competency to be lost over time. He quotes Elon Musk, the CEO of Tesla and SpaceX, as pointing out that technology does not automatically move forward just because that is what a trend line shows; it takes real people putting in intense work to move technology forward. I think these points are accurate. I highly recommend watching the talk and think it should basically be required viewing for all software developers.

This notion of an across-time brain drain is pretty terrifying, because it suggests that in the short term, companies like Intel, AMD, NVIDIA, and Qualcomm might simply remain the kings of microprocessors, while in the long term their products slowly decay in quality as they fail to pass knowledge on to future generations. If no meaningful new competition can ever emerge in the microprocessor space, it stands to reason that there is little incentive for innovation or for preserving reliability.

In practice, how could this possibly happen? From my perspective, I see it being possible in a number of ways. The first that comes to mind is that the culture of software development, particularly in the business application space where I spend so much of my daily time, has already begun to shun compiled, lower-level languages. A great number of people will discourage you from learning C, C++, or Assembly if you are not required to learn them while pursuing a computer science degree. I have experienced this myself when talking to colleagues about my interest in lower-level fundamentals: they tell me I am wasting my time in terms of how it would apply to my career.

Unfortunately, they may be right that these skills offer little hope of benefitting a career: most large companies are abandoning on-premises architectures, compiled languages, and the need for a lower-level understanding of computing. Fewer and fewer companies have any server rooms at all. Most software written for business today, by business, is done in JavaScript, Python, C#, or Java: none of which come close to the efficiency possible with the original family of compiled languages.

Another way this eventual collapse could occur is through mainstream academic institutions abandoning the teaching of computer science fundamentals. This has already begun to some extent: many schools choose Python or Java as beginning languages instead of the more verbose, classic compiled languages. There are arguments for this, and I would agree they are legitimate: doing the lower-level material first can be insanely difficult for folks to wrap their minds around. The trouble is really just that many folks will stop at the high level.

How do I know they might stop at the high level? Because I have seen colleagues do it already. These are adults who have been working in interpreted languages into their 40s, and they would not be capable of handling any of these deeper fundamentals if they were required to. And the marketplace really does not punish this at all, because there are virtually no jobs out there offering high dollars for lower-level work.

Moreover, one other significant way this can occur is inside company cultures. As both of the previous points suggest: companies are not hiring folks to do lower-level work, they are moving to the cloud, and they are leaving their architectures for others to manage without requiring of themselves any deep comprehension of the fundamentals their enterprises rest on. In my opinion, this is an ugly form of short-term thinking because it insidiously masks the long-term picture: the inability to be self-reliant over time, the gradual increase of technical debt from a lack of understanding, and what is probably also a gradual loss in speed because you are endlessly adding third-party abstractions, APIs, and other mechanisms. All of these third-party offerings then build upon other third-party offerings, and so on: the end result is a bloat that is seldom thought about adequately, and a loss in possible efficiency that creeps up over 5 to 10 to 30 years.

For some companies, this decay has already happened repeatedly. Bizarrely, the usual decision made by management teams following groupthink is to dismantle an existing architecture built on this inefficient, shoddy foundation of things snapped together quickly with figurative scotch tape, and then to build a brand new thing with a brand new third-party offering, which starts the cycle over again. Along the way, most of these companies gain little new functionality of benefit. At the end of it all, they have not automated many processes, beyond their tendency to add ever more unnecessary steps, or their inability to manage complexity to the point where the system begins to grind to a halt. This point is important too: even as engineers automate what used to be manual processes, humans see what the computer can do and decide they might as well pile more on top of it all because, well, we do not have to do data entry or manual validation of data anymore.

By now, hopefully I have painted the picture for you: there is a real danger here of calamity and disaster. It is easy to imagine these problems building over time into serious hurdles that may be difficult to back away from. This is not counting the many other examples in Blow's talk regarding what has happened over time with operating systems, drivers for hardware, performance, or just the general prevalence and acceptance of bugs in software. The attitude in business is to jump into a river of agile methodology where, regardless of what the original folks behind agile really meant, companies interpret agile however they would like and just ship the product, no matter what it is really like for users. We can fix it later, they say. Crawl, walk, run is a mantra I have often heard repeated, and then I see that nothing but crawling is done after we all utter the sentence.

How Do We Solve This?


There is no one answer that can easily be concluded. Bloat and other problems have arisen across the computing spectrum. Part of what I have described here is really a people problem: folks have been convinced that compiled, low-level work is a waste of time as it applies to the everyday problems business faces (which probably make up more than 70% of all the software development jobs out there). Folks have created a culture that has decided you are really better off, and more productive, if you never try to manipulate memory on a machine or optimize software to death. Yeah, you might have to deal with a 30-second to 2-minute loading time sometimes, but you know: that is life, deal with it.

This culture needs to change. And really, it has to change through individual folks standing up for these ideas in a coherent and articulate fashion. If you still feel unsure about any of it, I recommend you keep quiet and keep reading and learning for now, as that will be better for altering the culture anyway. Arguing while uncertain, or while unable to substantiate why doing more with low-level work matters, will only reinforce the presently established thinking.

Curiously, this same higher-level, abstracted-language culture does not exist everywhere. Dan Saks gave a talk at CppCon back in 2016 describing how, among embedded systems engineers, more have been migrating to C over C++ because, well, C++ is just not low-level enough for them, among similar talking points. He notes that, in persuasion, if you are arguing, you are losing. In other words, no matter which direction the industry is to go, you cannot push people into following a trend with any sort of forceful pressure. It will not work. They need to understand why, and they need to want to go in a particular direction.

In addition to culture, the proof has to be there as well: as an individual, you should be building things and expanding your skills with low-level work. And you should probably do this outside of your day job if you have time: make mistakes at home, and expand your skills there. For some of us, this might mean doing things more directly related to hardware. It might mean better understanding CPU or GPU chip designs, learning C and C++ for the first time, doing a general-purpose input/output (GPIO) project on a Raspberry Pi, writing some Assembly, and so on. We need to begin being studious ourselves if we expect others to follow this path.
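As one small, concrete example of the sort of hands-on exercise I mean: blinking an LED from a Raspberry Pi. The sketch below is purely my own illustration, and it assumes an LED wired to BCM pin 17 (adjust to your wiring) and a kernel that still exposes the legacy sysfs GPIO interface:

```cpp
// Minimal sketch: blink an LED on a Raspberry Pi via the legacy sysfs GPIO
// interface. Assumes an LED on BCM pin 17 and permission to write to
// /sys/class/gpio (typically root or the gpio group).
#include <chrono>
#include <fstream>
#include <string>
#include <thread>

// Write a value to one of the sysfs GPIO control files.
static void write_sysfs(const std::string& path, const std::string& value) {
    std::ofstream file(path);
    file << value;
}

int main() {
    const std::string pin = "17";  // BCM numbering; an assumption, not gospel

    write_sysfs("/sys/class/gpio/export", pin);                       // expose the pin
    write_sysfs("/sys/class/gpio/gpio" + pin + "/direction", "out");  // make it an output

    for (int i = 0; i < 10; ++i) {  // blink ten times
        write_sysfs("/sys/class/gpio/gpio" + pin + "/value", "1");
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
        write_sysfs("/sys/class/gpio/gpio" + pin + "/value", "0");
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }

    write_sysfs("/sys/class/gpio/unexport", pin);  // clean up
    return 0;
}
```

Nothing about this is glamorous, and that is the point: it forces you to see what the operating system is actually doing between your code and the hardware.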

The biggest advantage here is that there is real money and joy at stake. Gaining 30%-300% efficiency over slogging, sluggish solutions will save companies a ton of cash, and they have to see the practical benefit. Too often, the supposed debate between pragmatists and purists is a false dichotomy: sometimes taking the more purist approach is the practical decision, and there is no tradeoff to be made, only benefit. Of course, that is not ALWAYS true, and hence the classic debate can continue. But chances are that in any given organization, there are key areas where getting into gritty, low-level optimization on certain programs will do great things for the bottom line, and for the quality of life of employees.
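To make that concrete with a deliberately tiny, hypothetical example of my own: the two functions below produce identical results, but the second pauses to think about memory for one line and avoids the repeated reallocate-and-copy cycles the first triggers as the vector grows. Real-world gains depend entirely on the workload; this is simply the flavor of mechanical detail I mean:

```cpp
// Toy illustration of a low-level win: same output, different memory behavior.
#include <cstdio>
#include <vector>

// Naive: the vector reallocates and copies its contents repeatedly as it grows.
std::vector<int> squares_naive(int n) {
    std::vector<int> out;
    for (int i = 0; i < n; ++i)
        out.push_back(i * i);
    return out;
}

// Memory-aware: one allocation up front, then the same loop.
std::vector<int> squares_reserved(int n) {
    std::vector<int> out;
    out.reserve(n);  // a single allocation instead of many reallocations
    for (int i = 0; i < n; ++i)
        out.push_back(i * i);
    return out;
}

int main() {
    auto a = squares_naive(10000);
    auto b = squares_reserved(10000);
    std::printf("%d %d\n", a.back(), b.back());  // identical results
    return 0;
}
```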

Steering Elega Corp Based on This Research and Reflection


How does this translate into what to work on as an individual? How does it pertain to Elega and its current projects? I am continuing to think deeply about this while pursuing the direction. What I have found historically is that I may have been making mistakes by rushing into new projects too quickly. I often find that I am unprepared for certain parts of these projects, and it costs me in time, in bad implementations, or in lost opportunities because I am busy working on something else. Part of the reason Age of Nomads, the real-time strategy game I was working on, has been put on the shelf is that my skill set is not quite where it needs to be to do the product justice.

For that reason, I would like to keep my options open-ended and keep any projects I greenlight on a pretty small, simple scale for right now. The early phases of reading are beginning to sunset, and it is time to start getting some hands-on experiments going in the near future.

In general, there are four key areas, which I mentioned on Twitter not long before writing this post, where I will be focusing my hands-on practice.

1) Build a 2D game engine - It turns out that doing so does not seem terribly difficult for my skill level, although it will certainly be time consuming and present some unforeseen challenges all the same. I will probably do this using C++ and Simple DirectMedia Layer (SDL); a minimal sketch of the starting point I have in mind follows after this list.

2) Build a simple operating system - This will certainly be more of a challenge, but it will be an exercise in thoroughly understanding what is going on with the machine; a second sketch after this list shows the classic first milestone.

3) Create a breadboard CPU - A lot of people have done similar projects, but I feel I need to do it as well. Here is one example of someone else who has created their own 8-bit breadboard CPU.

4) Move toward getting away from the Unity engine - Yep, I am going to largely abandon Unity at some point, albeit not completely. A small number of projects currently in motion are utilizing the Unity engine for a number of reasons. I spent many hours over the years learning how to use the engine, but it is time to move on. Depending on how well the 2D game engine comes out, I may transition to it for certain projects, or I may turn to another engine like Godot.
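To make the first item concrete, here is a minimal sketch of the kind of skeleton I would start the 2D engine from: nothing more than opening a window and running an event loop with SDL2. None of this engine code exists yet; the window title and dimensions are placeholders:

```cpp
// Minimal SDL2 skeleton: open a window, pump events, clear the screen.
// Build on Linux with: g++ main.cpp $(sdl2-config --cflags --libs)
#include <SDL.h>

int main(int, char**) {
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;

    SDL_Window* window = SDL_CreateWindow(
        "Engine skeleton", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        640, 480, SDL_WINDOW_SHOWN);
    SDL_Renderer* renderer =
        SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);

    bool running = true;
    while (running) {
        SDL_Event event;
        while (SDL_PollEvent(&event)) {  // drain the event queue
            if (event.type == SDL_QUIT)
                running = false;
        }
        SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
        SDL_RenderClear(renderer);       // sprite rendering would go here
        SDL_RenderPresent(renderer);
    }

    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
```

Everything else an engine does (timing, sprites, input mapping, audio) ends up hanging off a loop shaped like this one.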

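For the second item, the classic first milestone is a freestanding kernel that proves you control the machine by writing directly to VGA text memory. A heavily hedged sketch of that milestone, assuming a Multiboot-compliant loader such as GRUB has already put the CPU in 32-bit protected mode and jumped to kernel_main (the boot assembly and linker script, which are also required, are omitted here):

```cpp
// kernel.cpp -- compile freestanding with a cross compiler, e.g.:
//   i686-elf-g++ -ffreestanding -fno-exceptions -fno-rtti -c kernel.cpp
// Assumes a Multiboot loader (e.g. GRUB) has set up protected mode.
extern "C" void kernel_main() {
    // 0xB8000 is the memory-mapped VGA text buffer: one 16-bit cell per
    // character, low byte = ASCII, high byte = color attribute.
    volatile unsigned short* vga =
        reinterpret_cast<volatile unsigned short*>(0xB8000);

    const char* msg = "Hello from a toy kernel";
    for (int i = 0; msg[i] != '\0'; ++i)
        vga[i] = static_cast<unsigned short>(msg[i]) | 0x0F00;  // white on black

    for (;;) { }  // nothing else to do yet: hang forever
}
```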
I should point out that these plans are really for 2020 and beyond: 2019 goals and projects are already in motion and will need to be completed. This means that Pluralsight course updates and new courses, Kalling Kingdom, and a couple of other important projects will need to be completed first.

But, in the same way that Kalling Kingdom was a background project for a number of months before becoming an active priority, these other projects will probably begin to receive some hands-on hours to get the ball rolling on learning and adapting my skills for them.

That about sums up everything currently in motion: I am now beginning to dig into what could end up being the long-term direction for the Elega brand over the next 5 to 10 years. It will be interesting to see whether any of this changes over time, as it easily could if, as I continue to learn, my perspective shifts somewhere else again.

At this point, this low-level interest and direction has persisted for over a year. And there are grand problems to solve with this approach.