Website and blog: https://seirdy.one

Full bio

Gemini: gemini://seirdy.one

Main fedi: @seirdy@pleroma.envs.net

PGP: see website

  • 2 Posts
  • 18 Comments
Cake day: Jan 28, 2021


If you’re asking for an open-source option because you want to self-host…well, at that point, you’d already have a web server. Just sftp/rsync the files over into a subdir of your web root.


The safety of TUI browsers is a bit overrated; most don’t do any sandboxing of content whatsoever and are run in an unsandboxed environment. Both of these are important for a piece of software that only exists to parse untrusted content from the most hostile environment known (the Web).

Check a CVE database mirror for your favorite TUI browser; if it has a nontrivial number of users, it’ll have some vulns to its name. Especially noteworthy is Elinks, which I absolutely don’t recommend using.

Personally: to read webpages from the terminal, I pipe curl or rdrview output into w3m that’s sandboxed using bubblewrap (bwrap(1)); I wrote this script to simplify it. I use that script to preview HTML emails as well. The sandboxed w3m is forbidden from performing a variety of tasks, including connecting to the network; curl handles that.

Tangential: rdrview is a CLI tool that implements Mozilla’s Readability algorithm. It uses libseccomp for sandboxing on Linux and Pledge to do so on OpenBSD. Piping its HTML output into w3m-sandbox makes for a great text-extraction workflow.


The problem is that your offline CA stores won’t use OCSP revocation checks or Certificate Transparency logs. You need live updates for those. The latter is especially important, as without it you’re completely dependent on one group of CAs.


Servers use Linux

The server, desktop, and mobile computing models are all quite different. The traditional desktop model gives programs the same privileges as the user and free rein over all of a user’s data; the server model splits programs into different unprivileged users isolated from each other, with one admin account configuring everything; the mobile model gives programs private storage and ensures that programs can’t read each other’s data and need permission to read shared storage. Each has unique tradeoffs.

macOS has been adopting safeguards to sandbox programs with fewer privileges than what’s available to a user account; Windows has been lagging behind but has made some progress (I’m less familiar with the Windows side of this). On Linux, all modern user-friendly attempts to bring sandboxing to the desktop (Flatpak and Snap are what I’m thinking of) let programs opt into sandboxing. The OS doesn’t force all programs to run with minimal privileges by default, with users controlling any escalation; if you chmod +x a file, it gets all user-level privileges by default. Windows is…somewhat similar in this regard, I admit. But Windows’ sandboxing options (UWP and the Windows Sandbox) are more airtight than Flatpak (I’m more familiar with Flatpak than Snap, as I have some unrelated fundamental disagreements with Snap’s design).

I think Flatpak has the potential to improve a lot. It could make existing permissions grantable at run time, so that filesystem-wide access is only enabled when a program tries to bypass a portal (most of the “filesystem=*” apps can work well without it, and some only need it for certain tasks). The current seccomp filter could become a “privileged execution” permission, with the default filters offering fine-grained ioctl filtering and toggleable W^X + W!->X enforcement. The versions of JavaScriptCore, GJS, Electron, Java, and LuaJIT used by runtimes and apps could be patched to run in JIT-less mode unless e.g. an envvar for “privileged execution” is detected. I’ve voiced some of these suggestions to the devs before.

My favorite (and current) distro is Fedora. If Flatpak makes these improvements, FS-verity lands in Fedora 37, dm-verity lands in Silverblue/Kinoite, and we get some implementation of verified boot that actually lets users control the signing key, I personally wouldn’t consider Fedora “insecure” anymore. Though I’d still find it a bit problematic because of Systemd. I wasn’t convinced by Madaidan’s brief criticisms of Systemd; I prefer this series of posts, which outlines issues in Systemd’s design and shows how past exploits could have been proactively (instead of reactively) avoided.

Systemd exposes nice functionality and I genuinely enjoy using it, but its underlying architecture doesn’t provide a lot of protections against itself. The reason I bring it up when distros like Alpine and Gentoo exist is that the distro I currently think best combines the traditional desktop model with some hardening–Fedora Silverblue/Kinoite–uses it.

QubesOS is based on Linux

QubesOS is based on Linux, but it isn’t in the same category as a traditional desktop Linux distribution. Like Android and ChromeOS, it significantly alters the desktop model by compartmentalizing everything into virtual machines under the Xen hypervisor. I brought it up to show how it’s possible to “make Linux secure” but in doing so you’d deviate heavily from a standard distribution. Although Qubes is based on Linux, its devs feel more comfortable calling it a “Xen distribution” to highlight its differences from other Linux distributions.

Here’s an exhaustive list of the proprietary software on my machine:

This is a defeatist attitude and meaningless excuse.

I only brought this up in response to the bad-faith argument you previously made:

I think you have gotten influenced by madaidan’s grift because you use a lot of closed source tools and want to justify it to yourself as safe.

I don’t use any closed-source tools on my personal machine beyond hardware support, emulated games, and webapps I have to run for online classes. Since you seem to be arguing in bad faith, I don’t think I’ll engage further. Best of luck.


He is a security grifter that recommends Windows and MacOS over Linux for some twisted security purposes.

Windows Enterprise and macOS are ahead of Linux in exploit mitigations. Madaidan wasn’t claiming that Windows and macOS are the right OSes for you, or that Linux is too insecure to be a good fit for your threat model; he was only claiming that Windows and macOS have stronger defenses available.

QubesOS would definitely give Windows and macOS a run for their money, if you use it correctly. Ultimately, Fuchsia is probably going to eat their lunch security-wise; its capabilities system is incredibly well done and its controls over dynamic code execution put it even ahead of Android. I’d be interested in seeing Zircon- or Fuchsia-based distros in the future.

When it comes to privacy: I fully agree that the default settings of Windows, macOS, Chrome, and others are really bad. And I don’t think “but it’s configurable” excuses them: https://pleroma.envs.net/notice/AB6w0HTyU9KiUX7dsu

I think you have gotten influenced by madaidan’s grift because you use a lot of closed source tools and want to justify it to yourself as safe.

Here’s an exhaustive list of the proprietary software on my machine:

  • Microcode
  • Intel subsystems for my processor (the ME; AMT is disabled). My next CPU hopefully won’t be x86_64, because the research I did on the ME and AMD Secure Technology gave me nightmares.
  • Non-executable firmware
  • Patent-encumbered media codecs with open-source implementations (AVC/H.264, HEVC/H.265). The implementations are FLOSS, but the algorithms are patented; commercial use and distribution can be subject to royalties.
  • Web apps I’m required to use and would rather avoid (e.g. the web version of Zoom for school).
  • Some Nintendo 3DS games I play in a FLOSS emulator (Citra). Sandboxed, ofc.

That’s it. I don’t even have proprietary drivers. I’m strongly against proprietary software on ideological grounds. If you want to know more about my setup, I’ve made my dotfiles available.


And… you cannot study the closed source software.

Sure you can. I went over several examples.

I freely admit that this leaves you dependent on a vendor for fixes, and that certain vendors like Oracle can be horrible to work with. My previous articles on FLOSS being an important mitigation against user domestication are relevant here.

Can you, with complete certainty, confidently assert the closed source software is more secure? How is it secure? Is it also a piece of software not invading your privacy? Security is not the origin of privacy, and security is not merely regarding its own resilience as standalone code to resist break-in attempts. This whole thing is not just a simple two way relation, but more like a magnetic field generated by a magnet itself. I am sure you understand that.

I can’t confidently assert anything with complete certainty regardless of source model, and you shouldn’t trust anyone who says they can.

I can somewhat confidently say that, for instance, Google Chrome (Google’s proprietary browser based on the open-source Chromium) is more secure than most WebKit2GTK browsers. The vast majority of WebKit2GTK-based browsers don’t even fully enable sandboxing (webkit_web_context_set_sandbox_enabled).

To determine if a piece of software invades privacy, see if it phones home. Use something like Wireshark to inspect what it sends. Web browsers make it easy to save key logs to decrypt packets. Don’t stop there; there are other techniques I mentioned to work out the edge cases.
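For TLS traffic, a sketch of the key-log approach (paths are placeholders; tshark is Wireshark’s CLI):

```shell
# Have the browser log TLS session secrets so captured traffic can be
# decrypted offline. Both Firefox and Chromium honor SSLKEYLOGFILE.
export SSLKEYLOGFILE="$HOME/tls-keys.log"
# Browse while Wireshark/tshark captures, then decrypt the saved pcap:
#   tshark -r capture.pcap -o tls.keylog_file:"$HOME/tls-keys.log"
```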

Certain forms of security are necessary for certain levels of privacy. Other forms of security are less relevant for certain levels of privacy, depending on your threat model. There’s a bit of a Venn-diagram effect going on here.

FLOSS being less secure when analysed with whitebox methods assures where it stands on security.

Sure, but don’t stop at whitebox methods. You should use black-box methods too. I outlined why in the article and used a Linux vuln as a prototypical example.

This will always be untrue for closed source software; therefore the assertion that closed source software is more secure is itself uncertain.

You’re making a lot of blanket, absolute statements. Closed-source software can be analyzed, and I described how to do it. This is more true for closed-source software that documents its architecture; such documentation can then be tested.

Moreover, FOSS devs are idealistic and generally have good moral inclinations towards the community and in the wild there are hardly observations that tell FOSS devs have been out there maliciously sitting with honeypots and mousetraps. This has long been untrue for closed source devs, where only a handful examples exist where closed source software devs have been against end user exploitation. (Some common examples in Android I see are Rikka Apps (AppOps), Glasswire, MiXplorer, Wavelet, many XDA apps, Bouncer, Nova Launcher, SD Maid, emulators vetted at r/emulation.)

I am in full agreement with this paragraph. There is a mind-numbing amount of proprietary shitware out there. That’s why, even if I was only interested in security, I wouldn’t consider running proprietary software that hasn’t been researched.


Yep. Foot is Wayland-only.

I should add that Alacritty running with X11 compatibility isn’t quite as fast as running it on Wayland. Both Alacritty and Foot can utilize Wayland’s excellent frame-timing/vsync support to render only when the display refreshes. Doing so reduces load (especially in Alacritty’s case, since it can offload most work to the GPU), which is sorely needed: rendering fonts properly is expensive, and terminals are latency-sensitive.


I am tired of people acting like blackbox analysis is the same as whitebox analysis.

I was very explicit that the two types of analysis are not the same. I repeatedly explained the merits of source code, and the limitations of black-box analysis. I also devoted an entire section to make an example of Intel ME because it showed both the strengths and the limitations of dynamic analysis and binary analysis.

My point was only that people can study proprietary software, and vulnerability discovery (beyond low-hanging fruit typically caught by e.g. static code analysis) is slanted towards black-box approaches. We should conclude that software is secure through study, not by checking the source model.

Lots of FLOSS is less secure than proprietary counterparts. The difference is that proprietary counterparts make us entirely dependent on the vendor for most things, including security. I wrote two articles exploring that issue, both of which I linked near the top. I think you might like them ;).


You’re not the first person to ask, which is why I updated the post to expand the acronym in the first sentence. Diff.


You make a lot of good points here, many of which I actually agree with.

The article focused on studying the behavior and properties of software. For completeness, it mentioned how patching can be crowdsourced with the example of Calibre. I also described how FLOSS decreases dependence on a vendor, and wrote two prior posts about this linked at the top.

I never claimed that source code is useless, only that we shouldn’t assume the worst if it isn’t provided.


@X_Cli@lemmy.ml I updated the post to add a bit to one of the counter args, with a link to your comment. Here’s a diff


Linters are a great thing I should’ve mentioned, esp. ones like ShellCheck. The phrase “low-hanging fruit” has been doing a lot of heavy lifting. I should mention that.

I talked a lot about how to determine if software is insecure, but didn’t spend enough time describing how to tell if software is secure. The latter typically involves understanding software architecture, which can be done by documenting it and having reverse engineers/pentesters verify those docs’ claims.

It’s getting late (UTC-0800) so I think I’ll edit the article tomorrow morning. Thanks for the feedback.


I find people who agree with me for the wrong reasons to be more problematic than people who simply disagree with me. After writing a lot about why free software is important, I needed to clarify that there are good and bad reasons for supporting it. …


Advanced font fallback is one of the defining features of the Foot terminal, if you’re interested. You can even specify different fonts, which is useful for e.g. getting emojis to fit in one cell.




Given the attack surface of addons, I’ve downsized my addon usage.

  • I’ve replaced HTTPS-Everywhere with the built-in HTTPS-first/only modes in FF and Chromium.

  • In FF, I use userContent.css instead of Stylus.

  • I use uBlock Origin’s url-rewriting filters in place of redirection addons.

  • In Chromium, you can choose to have an addon only be enabled on certain sites. I do this with Stylus and Dark Background Light Text.
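For reference, the URL-rewriting approach can be done with uBO’s removeparam static-filter syntax; the parameter names below are just examples:

```
! Strip common tracking parameters on all sites
! (uBlock Origin static-filter syntax; parameter names are examples)
*$removeparam=utm_source
*$removeparam=fbclid
```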

EDIT: more information:

  • I have a shell script that uses regex to “clean” URLs in the clipboard and remove tracking params, instead of the ClearURLs addon, since this is most useful when sharing links with others. I’ve gotten in the habit of previewing URL content before navigation (e.g. with a mouseover or by pasting into the URL bar) as well. If I want to navigate to a messy URL, I just copy it and enter a keybind to clean the copied URL.
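A minimal sketch of such a cleaning function (the parameter list is illustrative; the real script handles more params and edge cases):

```shell
# Remove some common tracking parameters from a URL.
clean_url() {
  printf '%s\n' "$1" \
    | sed -E 's/([?&])(utm_[a-z]*=[^&]*|fbclid=[^&]*)&?/\1/g; s/[?&]$//'
}

clean_url 'https://example.com/page?utm_source=news&id=1'
# prints: https://example.com/page?id=1
```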

I use multiple browsers and profiles.

  • Normal browsers: Firefox with Cookie Autodelete, uBO, Stylus, Dark Background and Light Text; Chromium with uBO and Stylus. Stylus is only selectively enabled.

  • For security-sensitive non-anonymous stuff, I run Chromium with flags to disable JIT and to disable JS by default, in a bubblewrap sandbox. This browser profile has no addons.

  • For peak anonymity (e.g. when using one of my anon alts), I run the Tor Browser in a Whonix VM. For quick anonymity I just use the regular Tor Browser Bundle in a bubblewrap sandbox. In an act of mercy towards my weak 2013 Haswell laptop’s battery, I no longer run Qubes. The Tor Browser should not ever be used with custom addons if you want anonymity.

Because the Tor browser should never run with addons and because I use a browser profile that has none, I don’t want addons to be a “crutch” that I depend on too much.

I do global hostname-blocking at the DNS level, so I can live without an adblocker. DNS blocking doesn’t do fine-grained subpage-blocking, conditional blocks, cosmetic filtering, redirects, etc. so a more complete solution is still worthwhile.
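One way to do that kind of hostname-blocking is a one-line entry per host in dnsmasq (my choice of dnsmasq here is just an example, as is the hostname):

```
# dnsmasq.conf: answer queries for a blocked hostname with an
# unroutable address
address=/tracker.example.com/0.0.0.0
```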

I also try to avoid injecting content into webpages with JS enabled, since that is extremely fingerprintable and opens a can of (in)security worms.

Some addons that I do not recommend at all:

  • Canvas Fingerprinting Defender: injects JS into pages, which is very fingerprintable and can trigger a CSP report if you don’t disable those. CSP reports can identify you even if you disable JS execution.

  • Anything that you can do without an addon, TBH. They do weaken the browser security model.



A recent article on Corporate Memphis: “Why does every advert look the same?”

Its popularity is the result of a feedback loop: it’s popular because it’s popular. It also makes people feel safe and comfortable (a form of brain-hacking, if you will).

Honestly, I wouldn’t mind it too much if it weren’t so overused. Now I immediately feel distrustful the second I see it. It makes me assume that I’m looking at a page made by an advertiser rather than something honest. Product information shouldn’t try to make me feel something; it should tell me why I should and shouldn’t use something.


I agree that the PR process is bureaucratic, but that’s not the workflow that Git was made for. It’s a workflow popularized by GitHub.

The workflow that Git was made for was “make commits” + “export patches” + “send patches”. This typically happens over a mailing list. Under this workflow, sending a contribution is a two-step process: git commit and git send-email. The recipient could be a mailing list, or it could just be the developer’s email address you grabbed from a commit message. That’s part of the reason why Git has you include your email in every commit.
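A sketch of that flow in a throwaway repo (the mailing-list address is a placeholder, and git send-email needs SMTP settings configured first):

```shell
# Patch-based flow: commit locally, export the patch, mail it.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name "Dev"
git config user.email "dev@example.com"
echo "fix" > file.txt
git add file.txt
git commit -qm "fix: correct file contents"
git format-patch -1        # writes 0001-fix-correct-file-contents.patch
# git send-email --to="list@example.com" 0001-*.patch
```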