Response to “Linux vs Windows”

In this blog post, I will be giving a brief response to a section of an online article I found. The article claims that Linux is “more secure” than Windows, which is of course not accurate. The article can be found at: http://techluminati.com/operating-systems/linux-vs-windows/

Please note that I mean *no* disrespect to the author of the article. Please also note that I am only replying to the part of the article that talks about Windows’ security, not the whole article.

The direct quotes from the article are in red, and my responses are in black.

Linux on the other hand, has always been a secure operating system since the early days. It has often been the subject of debate that an open source operating system cannot be as secure as a proprietary one, but Linux has proved that belief to be untrue. Overall, I believe that Linux offers much more security by default.

This is not true at all. Linux has had its fair share of security vulnerabilities too. Also, Linux was not “always secure from the early days” either. Linux itself had to undergo a lot of security patches to get to where it is today.


Overall, I believe that Linux offers much more security by default.

Not really. A default install of Windows Server 2012 R2 and CentOS Linux will have similar security defaults out of the box.

There have been plenty of security problems discovered in Linux over the years. Why people insist on saying that Linux is “more secure” than Windows is beyond me.


Access Privileges – Linux by default does not run as root (the Windows ‘Administrator’ equivalent). This ensures that any automated program or script cannot make changes to the system without explicit privileges from the user.

Windows (since Vista) does not let the user run as Administrator by default. The default Windows user has to press “Yes” on the UAC prompt to gain Administrator access; otherwise the user remains a limited, non-Administrator user.

Linux (specifically Ubuntu) does something similar. The default user in an Ubuntu install is a limited account too (like on Windows), with the ability to elevate to root if needed, but instead of having to click a “Yes” button to gain administrative access, the Ubuntu user has to enter their password.

I suspect Microsoft opted for clicking a “Yes” button for user-friendliness. However, you can configure Windows to force users to enter their password instead of just clicking “Yes”.
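Both mechanisms boil down to the same idea: a program only gets system-wide privileges when the user explicitly grants them. As a minimal illustration (the function name is my own invention), here is how a script on Linux or another Unix-like system might refuse to touch the system unless it was explicitly elevated:

```python
import os

def ensure_root():
    """Refuse to continue unless the script was explicitly elevated.

    On Linux (and other Unix-like systems) a normal user runs with a
    non-zero effective UID; geteuid() returns 0 only after elevating,
    e.g. via sudo.
    """
    if os.geteuid() != 0:
        raise PermissionError("re-run with sudo to modify the system")

# A system-modifying script would call ensure_root() before touching
# anything outside the user's own home directory.
```

This is the same gate that both the UAC prompt and sudo put in front of system changes; the only difference is how the user says “yes”.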


Although Windows has implemented a similar mechanism called ‘User Account Control or UAC’, which does provide good protection although not as robust as Linux does.

You claim that UAC is not as “robust” as Linux (I assume you are talking about “sudo” on Linux). This isn’t true. UAC does essentially what “sudo” does on Linux: it allows a Windows user to elevate himself or herself to Administrator without having to run as Administrator all the time.


Viruses – Viruses and other malware continue to be a constant headache for windows users. Combating viruses is not only time consuming, but also expensive when we talk about using Windows in a large scale production environment. Moreover, there is always a need to purchase expensive antivirus software with yearly subscriptions, punching additional holes in your pocket.

Linux on the other hand has a significantly smaller number of viruses, so you are considerably less likely to get infected. In fact, I am yet to hear this from a friend or a fellow systems administrator, that they are using Linux, and that it has been infected! I am sure most administrators or users must have had a similar experience.

Linux does have malware. It is rare to actually get malware on Linux, but the same goes for a properly set up Windows computer with a user that uses common sense. Just because someone uses Windows does not mean that they will catch malware, nor is Windows typically easy to infect.

I would say 99% of all Windows infections nowadays are caused by the user allowing the malware to infect the system (e.g., running an infected program as Administrator, opening an e-mail attachment manually from an unknown e-mail, running random downloaded exe files from the Internet), not the malware just “getting in” by itself without accidental help from the user.

Also, malware for Linux can be just as dangerous as Windows malware. For example, someone writes a shell script for installing…say…a media player for Linux. Well “John Doe” (our average computer person we are using as an example) downloads and then runs the shell script, using setup instructions on the author’s website. The script informs John Doe that he needs to run the script as “root”.

John Doe then says to himself “I want to use this media player, so I’ll go ahead and login as root”. John Doe logs in as root, then executes the installer again (on Linux).  What John Doe does not realize is that the installer also contained malicious code to create a user account (for the hacker) as well as a small SSH service that allows the hacker to gain unauthorized entry into John Doe’s computer.

Now did Linux magically prevent the malware from infecting John Doe’s computer? Of course not. Neither would Windows, if that setup had been for Windows. Whether the installer had been for Windows or Linux, the malware would have needed the user to perform a risky move (running the setup as root on Linux) to infect John Doe’s computer.

Now, not all malware requires the user to let it through, but I would say most of it does.

Quick Note: Running something as “root” on Linux is equivalent to running something as “Administrator” on Windows.

Also, FYI, it is extremely rare to catch a virus by just being connected to the Internet (that goes for Windows, Linux, or any other operating system).


Overall Security – Overall, I believe that Linux will always be much more secure than the Windows operating system given the fact that it’s open-source. It would interest you to know that there is something called the ‘Linus Law’ – named after the creator of the Linux kernel, Linus Torvalds – which states:

“given enough eyeballs, all bugs are shallow”

That quote is really a myth. If anything, there is so much code (as in the Linux kernel) that no one could constantly go through all of it to make sure that no “monkey wrenches” have been thrown into the works.

Not to mention all of the Android malware that exists.  Remember Android (which is what I use on my phone) is Linux, and being Linux has not stopped malware from infecting people’s phones.

Here is a list of Android malware out there now: https://forensics.spreitzenbarth.de/android-malware/

Technically any Android malware *is* Linux malware.  I suspect a lot of Linux users have never thought of it that way before. Basically Android having malware completely disproves Linux being “inherently secure” (who started that myth anyway?).

Now I am not saying that Android malware will magically work on a CentOS web server, nor am I saying to just forget using Linux.  What I am pointing out is that Linux does indeed have malware, and saying that it doesn’t is false.

Also, my Windows server has been semi-frequently targeted with attempts to exploit the Shellshock vulnerabilities (even this late in 2015). This tells me that there must still be Linux systems out there vulnerable to Shellshock, otherwise the attackers would not bother anymore. At least since the server I use runs Windows, I am not vulnerable to the Shellshock vulnerabilities.


In simple terms it means given a large number of developers and beta testers, every problem will be identified quickly and that the solution of that problem will be obvious to someone. I completely agree with this.

I respect your opinion, but I respectfully disagree. There would be so much code (like in the Linux kernel) that no one could constantly go through all of the code all the time.

Think about it. Someone sneaks a little bit of malicious code into a large open source project, code that deletes *all* of a user’s data (which does not require root privileges). Unless someone is constantly reviewing all of the source code for that project, they may very well miss the malicious code. It only takes one such incident in a large open source project to cause a mess that would be very hard to clean up.

Also, there really is no hard evidence that “open source == more secure”. Nor is there hard evidence that “closed/proprietary == more secure” either.


Posted in Computers, Operating Systems

Difference between TCP and UDP

If you work with computers, you have probably heard about the TCP and UDP protocols. While both are mechanisms to transmit data to other computers, they do not operate in the same manner. Below I’ll show you some differences between the two data protocols.

TCP (stands for “Transmission Control Protocol”)

  • It’s a connection-oriented data protocol
  • TCP is best used for applications that require high reliability
  • There is more overhead (more computer resources used) when using TCP
  • Other protocols such as HTTP, HTTPS, FTP, and SMTP make use of the TCP protocol
  • TCP makes sure that the order in which data is received is the same order in which it was originally sent
  • TCP is typically slower than UDP
  • TCP allows for “flow control”
  • TCP checks for errors in the data transmission
  • TCP acknowledges segments
  • TCP has both error checking and options to recover in case of an error
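To illustrate the “connection-oriented” point above, here is a minimal sketch in Python (the function name is my own) that sets up a TCP connection over localhost and echoes one message back; the connection is established first, then the bytes arrive reliably and in order:

```python
import socket
import threading

def tcp_echo_once(host="127.0.0.1"):
    """Open a TCP connection over localhost and echo one message back.

    TCP establishes the connection first (connect/accept) and then
    delivers the bytes reliably and in the order they were sent.
    """
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, 0))           # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    def serve():
        conn, _ = server.accept()    # the connection is set up here
        with conn:
            conn.sendall(conn.recv(1024))   # echo the bytes back
        server.close()

    threading.Thread(target=serve).start()

    with socket.create_connection((host, port)) as client:
        client.sendall(b"hello over TCP")
        return client.recv(1024)     # same bytes, same order

print(tcp_echo_once())  # b'hello over TCP'
```

All the acknowledgments, ordering, and retransmissions listed above happen inside the kernel’s TCP stack; the program just sees a reliable byte stream.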

UDP (stands for “User Datagram Protocol”)

  • Not a connection-oriented protocol
  • UDP is useful for applications that need fast transmission of data (regardless of data integrity)
  • Less overhead when using UDP, since UDP is a connectionless protocol
  • Other protocols and services such as DNS, DHCP, and VoIP make use of the UDP protocol
  • UDP does not make sure that data received is in the same order that it was originally transmitted (less reliable, but faster)
  • UDP is typically faster than TCP
  • UDP has no “flow control”
  • UDP does not retransmit lost or corrupted data (less reliable, but faster)
  • UDP does not acknowledge segments
  • UDP has basic error checking (a checksum) but does not have any way to recover from errors it detects
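For contrast, the same round trip over UDP needs no connection setup at all; each sendto() is an independent, self-contained datagram (a minimal sketch, with the function name my own):

```python
import socket

def udp_roundtrip(host="127.0.0.1"):
    """Send one datagram over localhost: no handshake, no delivery
    guarantee; each sendto() is an independent, self-contained packet."""
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind((host, 0))                  # port 0: OS picks a free port
    port = receiver.getsockname()[1]

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello over UDP", (host, port))   # fire and forget

    data, _ = receiver.recvfrom(1024)         # one whole datagram
    sender.close()
    receiver.close()
    return data

print(udp_roundtrip())  # b'hello over UDP'
```

Over localhost the datagram always arrives, but on a real network nothing in UDP itself would retransmit it if it were lost; that is exactly the overhead TCP adds and UDP skips.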

Posted in Computers, Internet and Servers, Operating Systems

Web Hosting at Home – Pros and Cons

Have you ever thought of hosting your own website from home? This is a question many people on the Internet have asked before. While there are many answers to the question I wrote above, I will give my basic opinion on the matter.

Hosting from home is not always easy. You must manage:

  • Your Internet connection (at least to a certain degree)
  • Your Internet router
  • Your server hardware
  • Your software (including securing your server OS installation and keeping the software up to date)
  • Hacking attempts
  • Clean-up after successful hacking attempts (this rarely happens on a properly set-up server)
  • Consistent backups of your data (I do three backups daily on my server)
  • Battery backups and power surge protectors
  • Support requests
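As an example of the backup item above, here is a minimal sketch in Python of just the archiving step (the function name and layout are my own invention); a real routine would also rotate old archives and copy them off the server:

```python
import tarfile
import time
from pathlib import Path

def backup_dir(src: str, dest: str) -> Path:
    """Archive the directory `src` into a timestamped .tar.gz in `dest`.

    A real backup routine would also rotate old archives and copy them
    off the server; this sketch covers only the archiving step.
    """
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = Path(dest) / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=Path(src).name)  # keep paths relative
    return archive
```

Something like this, run on a schedule (e.g. from cron), is how you end up with the several-backups-a-day habit I mentioned.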

My context when saying “hosting from home” is not running a large web-hosting business. I mean hosting your own private/business website as well as a few friends’ websites (including email). Obviously, you cannot be a large “GoDaddy”-style web host while running out of your home. That is not a realistic expectation.


The advantages of hosting from home are the following:

1) Privacy of your data

If set up correctly, no one can easily snoop on the private, confidential data that is on your own server.

2) You can choose your own server hardware

You are not limited by a web host’s hardware options for your server.

3) You can easily deal with server hardware failures

You don’t have to wait for a technician to fix your server.

4) Possibly cheaper for you in the long run

A decent Virtual Private Server purchased online could easily run $20.00/month, and a decent dedicated server could be as much as $100.00/month minimum!

5) No commitments to a web hosting company

You do not have to worry about some web hosting company making unreasonable demands (e.g., 50,000 file limit — on some large websites, you could easily go over a limit like this).

The disadvantages of hosting from home are the following:

1) Possible higher Internet service cost

Your ISP may require you to purchase a business Internet plan instead of a residential one; this can be expensive, but is not guaranteed to be.

2) You may not have enough upstream bandwidth

To host efficiently from home, you will need at least 30 Mbps of upload speed; any less and you will notice performance degradation when loading your websites from other locations.

Please keep in mind that if you want to host several large videos, I recommend either lowering the bit-rate of your videos – so they will stream faster for your users – or using a third-party hosting service just for the videos themselves. Most self-hosters will not have enough upload bandwidth to properly serve 1 GB+ video files.
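To put the upload-speed numbers in perspective, here is the basic arithmetic (assuming one visitor gets the whole uplink, and using the decimal units ISPs advertise):

```python
def transfer_seconds(size_gb: float, uplink_mbps: float) -> float:
    """Seconds to push a file of size_gb gigabytes through an uplink of
    uplink_mbps megabits per second (1 GB = 8,000 megabits, using the
    decimal units ISPs advertise)."""
    return size_gb * 8000 / uplink_mbps

# One visitor downloading a 1 GB video over a 30 Mbps uplink:
print(round(transfer_seconds(1, 30)))  # 267 seconds, roughly 4.5 minutes
```

And that is with a single visitor saturating the link; two simultaneous viewers would each see half that speed, which is why offloading big videos to a third party usually makes sense.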

3) Unstable home power

Some people may not have stable power at their house or business, and thus their server goes on and off; this can easily be fixed by putting in a battery backup for your server.

4) Reliability of your Internet Service Provider

Unless you have a signed agreement with your ISP, they are under no obligation to keep your Internet connection up 24/7.

5) More manual labor required

On a hosted solution online, someone else does the hard work of maintaining your server; when you host from home, you must do it all yourself.


I hope the above helps you decide whether to self-host web services. I know there are a lot of people online who say it is “bad, silly, stupid, not smart, wouldn’t recommend it” when it comes to self-hosting web services.

With all due respect to those people, most of them have never done web hosting by themselves before, and are trashing something they have never done (which is silly and bad in itself!).


Posted in Computers, Internet and Servers

Akamai Discovers Linux Botnet that Hits with 150 Gbps DDoS Attacks

According to a web article, Akamai (a Content Delivery Network company) discovered a massive Linux botnet. A botnet is basically a bunch of compromised computers that allow attackers to perform various tasks that would otherwise be virtually impossible to accomplish without everyone’s compromised computers.

Basically, the botnet spreads in the form of a Trojan. This Trojan targets Linux systems (including network routers). Once it gets into a system, it proceeds to download software that connects the computer to the botnet. The botnet is reportedly able to deliver DDoS attacks** of up to 150 Gbps.

As I have said on my blog repeatedly, Linux is not immune to security problems. No operating system on the planet is immune to security problems. In this case, it is people using weak, insecure passwords on their Linux boxes.

If I set my Windows box’s Administrator password to ‘password123’ or ‘qwerty’, enabled Remote Desktop on my computer, and allowed Remote Desktop through the firewall, I would eventually get hacked. Would that be Windows’ fault or Microsoft’s fault? No, of course not. It would be my fault for setting a bad password on my computer.

Many people say “Linux is more secure than Windows”, but if you notice – most of the time – they do not give any technical arguments to back up what they said.

For example, part of one comment posted online said (direct quote):
“The primary attack vector to take over these systems is default or weak login passwords, and allowing internet-facing remote root. That has no bearing on Linux suddenly being less secure than it was yesterday, or in any way magically now just as insecure as Windows.”

Notice he said “That has no bearing on Linux suddenly being less secure than it was yesterday, or in any way magically now just as insecure as Windows.”, but he did not give any technical arguments to back up what he said. How is Windows “insecure”? How is Linux “more secure”? I have seen this dozens of times (no kidding).

What is worse is people will listen to them, assuming they are correct (e.g., Linux is more secure than Windows), and go off and repeat the same misinformation around on the Internet without even bothering to check if the information they received is in fact accurate.

Web article link: https://www.engadget.com/2015/09/29/linux-botnet-hits-with-150-gbps-ddos/

** Simply put, a DDoS attack is an attack that uses up the victim’s available bandwidth. This causes the victim’s computers to stop functioning correctly when communicating with the outside world and the internal network.


Posted in Computers, Internet and Servers, Operating Systems, Software