Me at war with abstraction layers

Hi: As is probably obvious, I'm just me. I can't think of anything creative to call myself, so I decided to part the curtain and be just me. I've been thinking about how abstraction layers in computers have been the bane of our existence. They obscure what is really going on. We pay over and over for new or different abstraction layers that are heaped on top of existing ones, and the evil folks jack with these various abstractions to screw with our systems. Plus, the pile of abstraction layers that makes up a system like Windows 10 is so huge it renders most hardware unusably slow. Anyway, in an effort to have one less personal abstraction layer: HI! It's just me. *Waves like a madman* Hey ya! How ya doing? Yes, I'm wearing pants…

I like your point. On the other hand, abstraction layers (and abstractions in general) are needed to handle and understand complex systems. In the end, reducing abstraction layers is one way to make systems less complex. This is a clear case of "rightsizing".

Hi Hal, nice to meet you.

I will use Java as an example, as it's the language most familiar to me, and it is an additional abstraction layer on top of all the other ones.

I personally like abstraction layers.
They allow you to use a feature without knowing its inner workings. Java lets me write a program that works on a smart TV remote as well as a Windows or Linux desktop. I don't have to worry about packets having different maximum sizes or other system-specific details like that. I can just open a connection, read the data from a stream, and it works.
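To make that concrete, here is a minimal sketch of what I mean (the host name and the hand-rolled HTTP request are just placeholders; the point is that the same code runs unchanged wherever a JVM exists):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class StreamRead {
    public static void main(String[] args) throws Exception {
        // The JVM and OS hide the platform differences (packet sizes, byte
        // order, socket APIs); this runs the same on a TV, a phone, or a PC.
        try (Socket socket = new Socket("example.com", 80);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            // Send a minimal HTTP request so the server has something to answer.
            socket.getOutputStream().write(
                    "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n"
                            .getBytes(StandardCharsets.US_ASCII));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);   // just read the stream; it works
            }
        }
    }
}
```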

Another example: if you want to draw a GUI window in any programming language, you just make a few calls to the operating system, and it will arrange your window, keep track of its position and button presses, and so on.
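Here is roughly what those "few calls" look like with Java's Swing toolkit, as a quick sketch (the window title and button are placeholders); note how little of the window management is our problem:

```java
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

public class WindowDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Hello");        // the toolkit and OS manage the window
            JButton button = new JButton("Click me");
            button.addActionListener(e -> System.out.println("clicked"));
            frame.add(button);
            frame.setSize(300, 100);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);                    // position, repainting, events:
        });                                            // all tracked for us
    }
}
```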
If you wanted to build that same window without any abstraction layers, you would have to literally create a file that your PC is able to boot from; you would have to write it all in the specific assembly language your CPU executes, handle all possible events yourself, write the network drivers, handle network packets, manage disks, write code for every single file system you want to support, and write your own graphics drivers, since those are also just an abstraction…
All in all, it would take you ages to write just a simple program.
I think each of the things I mentioned has taken more than a lifetime's worth of programming hours to get to where it is now. So in all likelihood, you would never finish even a simple program that reads a file from a disk and sends it over the network.
Another side issue is that, without an operating system, you would have to reboot your computer every time you wanted to switch the program you run, because there would be no operating system managing everything.
The drivers of your operating system (another layer) basically allow you to not worry about the hardware you are running on. Unless you need to interact with very specific hardware, you don't need to worry about which keyboard your keypress comes from or what type of camera has been plugged in, because the driver just provides a video stream that you can read from.

There are, of course, not only benefits to abstraction layers.
I can understand that you don't like them, as they can reduce performance in some cases. That is basically the tradeoff we made: pay some performance, get lots of usability.
But for most use cases, even our abstraction layers are optimized well enough that you don't even notice they are there.

And if you want to do something very specific, you can still just go and inline some assembly, and either run it in C/C++ or put it into a library and call that library from Java.
You will have to build this library for each operating system and CPU architecture you want to support, but that is the tradeoff with abstraction layers.
After that, you have the best of both worlds: the performance for specific, rare use cases, and the convenience of a short, human-readable program that works everywhere (in the case of Java) or only needs minor adjustments (in the case of most other, system-specific languages).
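As a sketch of the Java half of such a setup (the library name "fastmath" is made up here, and the native side still has to be written in C/C++ and compiled once per OS and CPU architecture):

```java
public class FastMath {
    static {
        // Loads libfastmath.so / fastmath.dll: a hypothetical native library
        // that must be built separately for every platform you support.
        System.loadLibrary("fastmath");
    }

    // Implemented in C/C++ (possibly with inline assembly) and exposed via JNI.
    public static native long sumSquares(long[] values);

    public static void main(String[] args) {
        // Fails with UnsatisfiedLinkError until the native library exists.
        System.out.println(sumSquares(new long[] {1, 2, 3}));
    }
}
```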

Overall, I am very much in favor of the abstraction layers we have today. A lot of things would have to be reinvented every time if we didn't have them.

While it is nice at times to look under an abstraction layer to see the underpinnings, I definitely have to agree with the idea that they simplify what we do. Imagine how much more difficult it is to have to know whether you are reading from a hard drive (and is it an old IDE drive, a SATA drive, SCSI, or a CF-type device), a serial port, or something else. I have had to work at those layers, where, for example, to write a 10-byte stream in the middle of a file, you have to read in the containing 4K block, issue a command to erase that 4K block, and then write the modified information back.
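For the curious, here is a rough sketch of that read-modify-write dance in Java terms. It assumes the write fits inside a single 4K block, and it glosses over the erase step, since ordinary file APIs (an abstraction layer themselves) hide it:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class BlockRewrite {
    static final int BLOCK = 4096;

    // Write 'data' at 'pos' the way block devices force you to: read the
    // whole containing block, patch it in memory, write the block back.
    // Assumes the data does not cross a block boundary.
    static void writeInBlock(RandomAccessFile file, long pos, byte[] data)
            throws IOException {
        long blockStart = (pos / BLOCK) * BLOCK;
        byte[] block = new byte[BLOCK];
        file.seek(blockStart);
        file.readFully(block);                 // read the containing 4K block
        System.arraycopy(data, 0, block, (int) (pos - blockStart), data.length);
        file.seek(blockStart);
        file.write(block);                     // (raw flash would need an erase here)
    }

    public static void main(String[] args) throws IOException {
        try (RandomAccessFile f = new RandomAccessFile("demo.bin", "rw")) {
            f.setLength(2 * BLOCK);            // make sure the target block exists
            writeInBlock(f, 5000, "hello".getBytes());
        }
    }
}
```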

As a more recent example for me: at my previous contract, we provisioned network circuits of all sorts for customers all over the mid-Atlantic area. I cannot imagine doing that sort of thing in any sane fashion for users, much less for developers, without the abstraction layers we had. Multiple communications protocols, four major vendors, a dozen different hardware versions… and yet all our customer teams had to do was pick a device from a drop-down, pick an assigned port, change a few other settings (such as ADSL/VDSL/FTTP and the speeds), and save, and those layers did their thing so that the customer's circuit was up (or down, depending on the task). The only things they could not do were put cross-connects into place or physically install any other hardware. And if someone needed to check a circuit's status or verify its provisioning, there was no need to know which screens, commands, or MIBs to use, as we had handled all that in development, including mapping vendor-specific values to common forms.
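If it helps to picture it, the shape of that kind of abstraction is roughly this (all the names below are hypothetical, not our actual code): each vendor gets an adapter, and everything above it sees one common interface.

```java
// One common interface for callers; vendor quirks live in the adapters.
interface CircuitDriver {
    void provision(String port, String profile, int speedMbps);
    CircuitStatus status(String port);   // vendor-specific values mapped to common forms
}

enum CircuitStatus { UP, DOWN, TESTING }

// One adapter per vendor/firmware family.
class VendorAAdapter implements CircuitDriver {
    public void provision(String port, String profile, int speedMbps) {
        // translate the request into vendor A's commands / MIB writes here
    }
    public CircuitStatus status(String port) {
        // read vendor A's raw status codes and map them to the common enum
        return CircuitStatus.UP;
    }
}
```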

Abstraction layers are necessary though.

I mean, yeah, there are a lot of them, and certain OSes have too many (*cough* Windows 10 *cough*), but if you don't have them, then you have to custom-build whatever it is you're trying to do.

Yeah, you get one hell of a performant machine after you strip away all the garbage, but it also costs hundreds if not thousands of times as much to produce.

Take a game like Pong, for example. You can build it with maybe 20-30 electrical components, in such a way that, barring a mechanical failure of the buttons, it will last thousands of years. But you would probably also have to spend hundreds of dollars, thousands even, to get your Pong machine.

Or… you can open a text editor on your machine, write about 100 lines of JavaScript and HTML, and boom: a Pong game, basically for free. The latter relies on hundreds of abstractions, though…
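For comparison's sake, here is the same spirit in about 40 lines of Java Swing instead of JavaScript; it is only the bouncing ball, with no paddles or scoring, and every layer underneath it is borrowed:

```java
import java.awt.Color;
import java.awt.Graphics;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.Timer;

public class MiniPong extends JPanel {
    int x = 50, y = 50, dx = 3, dy = 2;       // ball position and velocity

    MiniPong() {
        new Timer(16, e -> { step(); repaint(); }).start();  // ~60 FPS game loop
    }

    void step() {
        x += dx; y += dy;
        if (x < 0 || x > getWidth() - 10) dx = -dx;   // bounce off the walls
        if (y < 0 || y > getHeight() - 10) dy = -dy;
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.setColor(Color.BLACK);
        g.fillRect(0, 0, getWidth(), getHeight());
        g.setColor(Color.WHITE);
        g.fillOval(x, y, 10, 10);
    }

    public static void main(String[] args) {
        JFrame f = new JFrame("MiniPong");
        f.add(new MiniPong());
        f.setSize(400, 300);
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.setVisible(true);
    }
}
```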

Code for early video game consoles was almost all assembly language, without any abstraction layer, because the CPUs were just not fast enough to do the work any other way. In regards to the comment about writing to a GUI abstraction layer: I understand the point, but the reality… ugh.

Humm… Well, it's like this (I think): you write your application/program in whatever language you choose (the popular one seems to change regularly). That calls some kind of drawing library like Cairo, which is used by a toolkit like GTK (or Qt, or?), running under a desktop environment like GNOME or KDE. That in turn talks to a display server such as Xorg or Wayland, which calls the Linux kernel driver, which drives the actual hardware: Nvidia or Intel, maybe AMD. My point is that every layer (6 or 7, I've lost count, and I'm not clear whether Wayland calls anything in X; all I know is it seems not to work), whatever it is, leaves 99% of programmers totally in the dark as to what is actually happening to draw a line or print a character of text. And while some of the layers exist as code you "could" find the source to, few to none of them are written with enough comments (my opinion). And don't get me started on "self-documenting code" or, worse, intentional obfuscation.

This reaches a level of abstraction so deep that a microprocessor far faster than the "supercomputers" that needed liquid nitrogen to run ends up turning most computers (only a bit old) into dead-slow hunks of unusable garbage. I run apps where I can type faster than they can display the text. When I scroll the screen I sometimes see weird junk flashing, and parts of what is supposed to display just don't draw. I've got storage (hard disk) that is both huge and, hardware-wise, unbelievably fast (to me), yet the drive is grinding away all the time, and when I want my program to access files on it, unknown processes are so busy that the poor thing goes mental.

If the point of all these abstraction layers was that they were written to make your program run as fast as the hardware could go, THAT would be something to get behind. Just take disk I/O speeds: we can look up on a datasheet how fast the drive does reads and writes, yet how many systems deliver data at anything even close to that speed? I would be more understanding if the computer in question were serving a bunch of users, but… it's just me!
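For what it's worth, here is a quick-and-dirty way to check your own drive against the datasheet (pass a large file path as the argument; and note that the OS page cache, yet another layer, can inflate the number on a second run):

```java
import java.io.FileInputStream;

public class ReadSpeed {
    public static void main(String[] args) throws Exception {
        byte[] buf = new byte[1 << 20];          // 1 MiB read buffer
        long bytes = 0, start = System.nanoTime();
        try (FileInputStream in = new FileInputStream(args[0])) {
            int n;
            while ((n = in.read(buf)) > 0) {
                bytes += n;                      // count everything we pull off disk
            }
        }
        double secs = (System.nanoTime() - start) / 1e9;
        System.out.printf("%.1f MB/s%n", bytes / 1e6 / secs);
    }
}
```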

If the millions of instructions per second of a modern CPU were dedicated to doing a few things instead of a bucket brigade of huge, time-wasting code… SIGH. I keep telling myself, but everyone is convinced that there can be nothing better than to lump a new abstraction layer on top of the existing one, or that the operating system needs to be made bigger on a regular basis, yet it seems to accomplish very little that is new and wonderful. Just different, and often more obscure.

Am I sounding like an old fart? Ha ha…

It's a really old saying, but I think "too many cooks spoil the broth" is still as true now as it ever was.

Hello, and welcome to the Forums