Computer Security — 1

I am not a security guru and do not play one on TV.

I have been using computers for more than 40 years, Linux for more than 21 years, and at home Linux has been my daily driver since 2009. Other than following basic strategies I do not fret about security, anti-virus software, and the like. That was true even in my long-gone days of using Windows.

Security is a process. Much like ogres and onions, security is about layers.

The first layer is between the ears. Attitude and awareness.

Possibly the leading reason computers get compromised is PEBKAC (Problem Exists Between Keyboard And Chair): the user. Human nature is predictable in that many people will trade security for convenience, some often and some only occasionally. That same predictability is what allows various social engineering techniques to succeed.

Another step in computer security is establishing a threat model. Every person's is different. A threat model weighs perceived risks against desired convenience.

Many exploits require physical access to a system or administrative privileges. By design, Linux systems deny administrative privileges until explicitly granted.
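
That denial is easy to demonstrate from an ordinary account. A tiny Python sketch, run as a regular user (the path is an illustrative choice):

```python
import os

# A regular user's process carries no administrative rights, so writing
# under /etc fails until privileges are explicitly granted (e.g. via sudo).
print("effective uid:", os.geteuid())

try:
    with open("/etc/demo-test", "w") as fh:
        fh.write("test")
except PermissionError as exc:
    print("denied, as expected:", exc)
```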

All human relationships depend on trust. There are no exceptions. Fortunately, with free/libre software trust levels are high when software is installed from the distro repository system. Once in a blue moon a security issue arises with a package, but in the free/libre software environment those instances are publicly exposed and promptly resolved.

Installing software outside the repo system changes the trust calculation. The only way to establish reasonable trust in such software is due diligence: research the web, and verify downloads when the project publishes checksums or signatures.
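
As one concrete check, compare a download against the SHA-256 checksum the project publishes. A minimal Python sketch (the script name and arguments are hypothetical):

```python
import hashlib
import sys

def sha256sum(path: str, chunk_size: int = 65536) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python3 verify_download.py <file> <published-sha256>
    path, expected = sys.argv[1], sys.argv[2].lower()
    actual = sha256sum(path)
    print("OK" if actual == expected else f"MISMATCH: got {actual}")
```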

Keeping a system updated with security patches helps avoid losing sleep.

With respect to external exposure, if a port is not open then, short of zero-day exploits, nobody is getting in. Basically, do not run or launch unnecessary services and processes. That is a solid foundation. In the free/libre world, published zero-day exploits are promptly patched and should not cause loss of sleep.
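
A quick way to check is to try connecting to common ports on the machine itself. A minimal Python sketch, probing only the loopback interface (the port list is an illustrative assumption; tools such as ss or nmap give a fuller picture):

```python
import socket

# Ports and labels here are illustrative, not an exhaustive list.
COMMON_PORTS = {22: "ssh", 25: "smtp", 80: "http", 443: "https", 631: "cups"}

def is_listening(port: int, host: str = "127.0.0.1", timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

for port, name in sorted(COMMON_PORTS.items()):
    state = "OPEN" if is_listening(port) else "closed"
    print(f"{port:>5} ({name}): {state}")
```

A service bound only to 127.0.0.1 shows as open here yet remains unreachable from outside, so the view from another machine is what matters for external exposure.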

Even with some open ports, basic firewall strategies still help, for example by blocking unrequested egress, meaning outbound traffic that nothing on the system legitimately initiated.

Most computer networks include a device commonly called a router. Configured with basic firewall rules, that device provides another layer of isolation.

Protecting personal data is a little different. Outside the web browser, and outside of their respective configuration files, free/libre programs make no attempt to scrape data files. Any software that did would be publicly exposed. Theoretically, file scraping could happen within web browsers, but access is limited because, by intent and design, web browsers run sandboxed. That isolation can be further improved by limiting the bane of the web to trusted web sites only. Cookies can be configured to not be stored at all, or to be stored only for the duration of a session.

Theoretically, the bane of the web can be used within web browsers to access credentials. The sandboxed design is supposed to prevent that. Commonly, any such possible exploit is published and promptly patched.

Mail clients such as Thunderbird ship with a default configuration that impedes shenanigans such as trackers and beacons.

Access to certain web sites, such as banking portals, can be handled with a separate web browser profile. Most if not all banking portals require the bane of the web and cookies. Another layer of security is possible by not allowing the browser to save login credentials.

Password managers can help protect login credentials.
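
Beyond storing credentials, most password managers also generate strong random passwords. The same idea takes a few lines with Python's standard secrets module (the length and alphabet below are illustrative choices):

```python
import secrets
import string

# The alphabet and default length are illustrative, not a standard.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```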

Other files that might contain sensitive information can be encrypted. For example, LibreOffice can encrypt documents saved with a password.
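
For files that no application encrypts natively, a small script can fill the gap. Here is a minimal sketch, assuming the third-party Python cryptography package is installed (pip install cryptography); the file names are hypothetical:

```python
from cryptography.fernet import Fernet

def encrypt_file(src: str, dst: str, key: bytes) -> None:
    """Encrypt src with Fernet (AES-CBC plus an HMAC) and write dst."""
    with open(src, "rb") as fh:
        token = Fernet(key).encrypt(fh.read())
    with open(dst, "wb") as fh:
        fh.write(token)

def decrypt_file(src: str, dst: str, key: bytes) -> None:
    """Decrypt a file produced by encrypt_file."""
    with open(src, "rb") as fh:
        data = Fernet(key).decrypt(fh.read())
    with open(dst, "wb") as fh:
        fh.write(data)

# Create a throwaway file so the sketch runs end to end.
with open("notes.txt", "w") as fh:
    fh.write("sensitive notes\n")

key = Fernet.generate_key()  # the key must be stored somewhere safe
encrypt_file("notes.txt", "notes.txt.enc", key)
decrypt_file("notes.txt.enc", "notes_restored.txt", key)
```

The key, of course, becomes the new secret to protect, which is where a password manager or an encrypted key file earns its keep.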

Tools such as AppArmor and SELinux provide additional layers of protection and isolation. Whether they are necessary is up to each user and the perceived threat model. With basic precautions most home users probably do not need those layers, but some might. Some people might argue that such tools, including anti-virus, are more or less “security theater.”
