To me, Sorcery from Sourcemage GNU+Linux is pretty flexible and really appealing. Like Gentoo, Sourcemage is a source-based GNU+Linux distribution, and it was my preferred one for a long time; it's just that I couldn't keep up with having to build everything, :), but it's also way cooler to cast spells rather than install packages, :)
The thing with Wayland is that if you don't use gnome/kde/tiling-compositors, your options are pretty limited; actually, the closest to an independent stacking compositor would be wayfire.
You can make things work, but with a lot of nuances here and there, to the point you get tired of dealing with them… gtk4 not paying attention to GDK_DPI_SCALE; no file manager recognizing gvfs except for nautilus, which, guess what, is already gtk4; electron apps built on top of the latest electron (22) and gtk3 not paying attention to GDK_DPI_SCALE either; wf-shell not offering a tray while waybar does, but waybar not offering a launcher button while wf-shell does; and so on…
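For reference, these are the kind of scaling knobs I mean (a sketch; the values are purely illustrative, and as said, gtk4 and recent electron builds ignored the fractional one for me):

```
# HiDPI scaling hints read from the environment by GTK apps;
# gtk3 honors both, while gtk4 and electron 22 apps ignored
# GDK_DPI_SCALE in my experience.
export GDK_SCALE=2          # integer UI scaling
export GDK_DPI_SCALE=0.75   # fractional font/DPI scaling
```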
I actually spent around a week recently trying to make myself comfortable with Wayland, but for me it still lags behind Xorg.
But perhaps the author is right, and if Xorg had been dropped altogether, Wayland would be in better shape; that we'll never know… On the other hand, who knows if someone would have forked it, or revived it after noticing too many users complaining. It's been many years since its introduction, and Wayland is still not a workable replacement for Xorg for a good amount of Xorg users (usually not considered in the bulk numbers, though). Every now and then I try it, when reminded of this "The Register" post, but I haven't gotten to the point of really wanting to migrate…
Hopefully things will change, but till then, I really hope Xorg keeps it up.
In general, I don't like the idea of flatpak, snapcraft, and appimage packages. First, and though this differs between them, in the end they all suffer from it one way or another: huge binary dependency blobs, whether bundled with the app itself or installed from the package provider. At some point I tried to install Liri from Flatpak, and it was a nightmare of things having to be installed, when I already had most of them natively built by the distro I used.
As opposed to opinions from Linus himself, I prefer SW to be built against the same system libraries and dependencies, rather than each app coming along with its own set of binary dependencies. Getting GNU/Linux to behave like MS-Win, where you can install whatever binary from whatever source, perhaps even duplicating a bunch of stuff you already have on your system, is crazy. One thing that does get solved more easily with contained apps is depending on the same things but at different versions, and that to me is not ideal either. To me, as done by distros, one should avoid as much as possible having different versions of the same SW; if really needed, then rename one to include the version as part of the name, or something, but mainly avoid having copies of the same thing at multiple versions all over. Guix handles that more elegantly of course, but I haven't had the time, nor the guts, to go for Guix yet (still on my list of pending stuff).
The other thing is that, although nowadays everything comes with a signature to check, with distro-provided packages, which are built from source, besides minimizing the amount of stuff needed, one can always look at how the packages are built (on arch and derivatives, through the PKGBUILDs and companion files), tweak, and build oneself. For example, fluxbox's current head of master, and from a while back, doesn't play nice with lxqt; with the help of the fluxbox devs I found the culprit commit, reverted it, applied the same distro recipe with my own patch on top, and moved on. No matter how well signed, binary packages are not as flexible. That's besides the fact that several are just proprietary, and one might not even be aware, since the move is to be more MS-Win like, even with auto updates and such…
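Roughly, that tweak-and-rebuild workflow looks like this on arch and derivatives (a sketch; the AUR URL, package name, and patch file are illustrative, not the actual fluxbox fix):

```
# Fetch the distro recipe, drop in a revert patch, rebuild locally.
git clone https://aur.archlinux.org/fluxbox-git.git
cd fluxbox-git
cp ~/patches/revert-culprit-commit.patch .
# Reference the patch in the PKGBUILD's source=() array and apply it
# in prepare(), e.g.: patch -Np1 -i ../revert-culprit-commit.patch
makepkg -sri    # build against system deps, then install the result
```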
Building with minimal systems and ecosystems in mind, with mostly free/libre or at least open source SW, makes things much better for me. One can still end up with huge bloated systems if wanted, but at least not with a bunch of unnecessary duplicates.
$1500 for the laptop: tomshardware reference. Well, it's been a while since I last bought a laptop for personal use, but I'm wondering if there are $500 laptops which are worth acquiring.
That said, please remember risc-v doesn't mean open source CPU, it just means open source ISA. Actually, there are many vendors now offering risc-v CPU IP and other blocks, such as Cadence, Siemens (formerly Mentor), and others, but those are not open source. And even if the CPU were open source, there are other components which might not be. And there's the thing about firmware binary requirements…
Looking for a fully open source system, both HW and SW, and without binary blobs, is sort of hard these days. Hopefully that's not too far away…
If looking for risc-v though, Roma it is, since there's nothing close to it yet, :)
I've been looking for a p2p alternative that would allow a simple workflow, so I had some hope when noticing radicle. But it builds on top of the blockchain hype, I'm afraid. This cryptopedia post shows things I really don't like.
It's true git itself is sort of distributed, but trying to develop a workflow on top of pure git is not as easy. Email-based workflows have been worked on, but not everyone is comfortable with them.
A p2p alternative using openDHT would have been my preferred approach. But anyways, I thought radicle could be it. So far, though, I don't like what I'm reading, even less with whom they are partnering:
Radicle has already partnered with numerous projects that share its vision via its network-promoting Seeders Program (a Radicle fund), including: Aave, Uniswap, Synthetix, The Graph, Gitcoin, and the Web3 Foundation. The Radicle crypto roadmap includes plans to implement decentralized finance (DeFi) tools and offer support for non-fungible tokens (NFTs). With over a thousand Radicle coding projects completed, this RAD crypto platform has shown that it’s a viable P2P code collaboration platform, one that has the ability to integrate with blockchain-based protocols.
Perhaps I'm just too biased. But if there's another p2p alternative, hopefully free/libre SW and non-blockchain, then I'd be pretty interested in it…
Well, it seems sourcehut will have a web-based workflow, judging from this postmarketOS post:
We talked to Drew DeVault (the main developer of SourceHut) and he told us that having the whole review process in the web UI available is one of the top priorities for SourceHut
…
SourceHut is prioritising to implement an entirely web-based flow for contributors.
These things don't happen in one day, so don't hold your breath yet, but it seems it's coming at some point…
I believe there’s a lot of misunderstanding of what’s freeSW, what’s openSW, and what debian repos have been providing all along.
Debian has been providing a "non-free" repo for all the versions they keep on their repo servers (experimental, unstable, testing, stable) for as long as I can remember.
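It's literally one opt-in line in apt's configuration; a sketch of a typical stable setup:

```
# /etc/apt/sources.list: "main" is the freeSW Debian stands behind;
# "contrib" and "non-free" have always been there, but strictly opt-in.
deb http://deb.debian.org/debian stable main contrib non-free
```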
And to me it's important to distinguish what's freeSW vs. what's not, and I prefer to use freeSW, unless I'm forced to use something that's not freeSW and there's no way to overcome that.
This is one of the things openSW movements (remember, IBM, MS, Google, and several other corps are all part of, or contribute to, openSW foundations, but have never supported the idea of freeSW) have pushed for, and convinced most people of. Now the value of freeSW means almost nothing, and most are just happy with openSW. I can't judge anyone, but I'll just say: this is really sad. And once again I see people treating those defending principles as 2nd class citizens, :(
Well, sourcehut can be self-hosted as well (ain't it open source anyways?):
https://sr.ht/~sircmpwn/sourcehut https://man.sr.ht/installation.md
That said, sourcehut has privacy features, and libre-oriented features, that gitlab doesn't. But I understand that, as of now, without a webUI, it's pretty hard to adopt sourcehut; and even when it finally gets one, having invested in gitlab (or, for the majority, github), which implies time and resources, trying sourcehut might not be easy anyway.
The central webUI would be key for major players' adoption, and more time as well. It wasn't long ago that debian, xorg, and arch (the latter still in progress) migrated to gitlab, for example. Those migrations are expensive in people resources and time.
And adoption by regular individuals, besides enabling the webUI, might be way harder, unless someone contributes to sr.ht resources to allow hosting projects for free, even with no CI support. It's hard to get individuals to adopt at some cost, even a really low one, when there are alternatives, which BTW violate SW licenses, for free, :(
Better? :)
See, it all depends. As @Jeffrey@lemmy.ml mentioned, out of the box you can easily start mounting remote stuff in a secure way. Depending on the latency between the remote location and you, SSHFS might prove more resilient than NFS, though in general it might be slower (data goes encrypted and encapsulated by default); still, within the same local LAN (not as remote as mounting something from Texas into Panamá, for example), I'm more than OK with SSHFS. Cifs or smbfs is something I prefer to avoid unless there's no option: you need a samba server exposing a "shared" area, it requires MS-NT configurations to work, and managing access control and users is, well, NTish. To me it's way simpler to access the remote FS through SSH on a device I already have SSH access to. So it boils down to NFS vs. SSHFS, and I consider the SSHFS way easier, faster, and more secure.
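A minimal sketch of what I mean by out of the box (host and paths are placeholders), reusing the SSH access and keys already in place:

```
# Mount a remote directory over the existing SSH access; no extra
# server-side service or NT-style share configuration needed.
mkdir -p ~/mnt/remote
sshfs user@remote-host:/home/user ~/mnt/remote
# ...work on ~/mnt/remote as if it were local...
fusermount -u ~/mnt/remote    # unmount when done
```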
But "better", apart from being somewhat subjective, depends on your taste as well.
FYI: kmail does support office365 + exchange; the thing about the kontact suite is its akonadi DB dependency and all the kde deps required. It's like anything kde you install: it brings a bunch of other stuff, usually not anything you end up using…
However, I do like how kmail integrates with local gnuPG, rather than Thunderbird's librnp, which I end up replacing with Sequoia Octopus librnp…
I misread the article's title, and yes, I didn't see more signs of a privacy discussion within, though this conclusion:
DRM’s purpose is to give content providers control over software and hardware providers, and it is satisfying that purpose well.
is precisely one of the things I dislike about DRM… At any rate, my bad with the title…
We don't have to agree with his criteria, do we? Starting from the fact that most DRM implementations are not open source. Besides, in order to control what you use, it's implied DRM has access to see what you get, when you get it, where you use it, and so on. That's by definition a privacy issue: they can get stats on what you consume, how often you use it, where, on which devices, and so on.
But the main issue with DRM, I'd agree, is not privacy itself; it's an ethical one. And DRM has never prevented piracy. Its main issue is controlling and limiting your use of what you acquire/buy: disallowing sharing, sometimes even with yourself; disallowing unauthorized devices; or disallowing you to see content you should have access to unless you have an internet connection to the corp watching and controlling how you use such content, or whatever else is protected under DRM.
Of course, the blog comes from someone working at a big corp. At any rate, I guess not all open source supporters actually agree with the FSF that DRM is unethical. It so happens I do…
https://www.fsf.org/campaigns/drm.html https://www.defectivebydesign.org https://www.defectivebydesign.org/what_is_drm https://www.fsf.org/bulletin/2016/spring/we-need-to-fight-for-strong-encryption-and-stop-drm-in-web-standards
ohh, there's a tweet; however, I'll have to see if it'll allow using openkeychain instead of TB's own librnp, which I really dislike on the desktop, where I use sequoia octopus librnp (on top of gnupg) instead.
I really don't like TB's way of keeping and maintaining keys (I use the gnupg "external" key feature for my private key, but TB's librnp still wants it stored in its own DB for no reason, otherwise it can't do a thing). And the same that applies to FF applies to TB: they shouldn't attempt to keep passwords and keys themselves. Better to use gnupg, and for passwords something like qtpass on the desktop; on android, there's openkeychain and others… And they have watched how it's possible to do something like what the sequoia team does, but I guess they like what they chose to do, :( Using sequoia octopus librnp on mobile might be rather complicated (it's somewhat tedious to use it even on distros not officially supporting it, since TB's changes lately tend to break octopus, and besides, one needs to replace the library on every TB upgrade)…
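That per-upgrade chore is basically swapping one shared library; a sketch of it (the TB install path and the built library name are assumptions, and they vary per distro and octopus version):

```
# Back up TB's bundled librnp and drop in the Sequoia Octopus build,
# which provides the same librnp interface on top of gnupg.
cd /usr/lib/thunderbird                  # assumed TB install path
sudo cp librnp.so librnp.so.orig
sudo cp ~/src/sequoia-octopus-librnp/libsequoia_octopus_librnp.so librnp.so
```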
But for those using big corp email providers, then yes, TB on Android is good news. In general it's good to have TB on mobile as well; I just hope they'll provide more options to users. Extensions for gnupg are all banned (admittedly, enigmail was mangling too much of TB's code), and they don't like autocrypt either, so no options…
I prefer k9, but that's a matter of taste. Apart from the gmail affair, well, I really never saw much difference (agreed, fairmail is more "standard" in the way it treats directories, but once you get used to k9, you see the benefits of its own ways).
On the gmail affair, well, the route fairmail chose for oauth2 authentication with gmail (k9 doesn't do it) is through having a google account on your phone, so even if there's benefit over, say, the gmail app, it's terrible, even if you use LOS4microG or similar. I no longer have a google account, since like 3 years ago, and I recommend de-googling, but I understand it's hard for many, particularly those using work google accounts, :(
I wouldn't go as far as the "guy with a computer" does; for example, in the case of gnu+linux phones, and gnu+linux non-desktop (mobile) in general, there are several apps needing to be adapted to the mobile form factor and touch screen, plus a plain lack of apps as well, so most probably flatpak or similar are required on gnu+linux phones.
But I truly dislike the bunch of stuff one gets installed when installing any flatpak or similar package: both other flatpak packages, plus what each package itself carries within. It's way bloated, in my humble opinion.
As an Artix/Arch user, I prefer AUR packages, and I even prefer the AUR packages that require building over the binary (*-bin) ones, so that I get the system libraries linked in and rightly built packages. Only if there's no option (mainly unavoidable proprietary/closed apps) do I go with AUR binary packages. And if something is only available through flatpak or similar, then I prefer looking for an alternative SW/app.
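That build-it-yourself flow is short enough that I don't miss the *-bin shortcuts (the package name is just an example):

```
# Clone the AUR recipe, review it, build against system libraries.
git clone https://aur.archlinux.org/some-app.git
cd some-app
less PKGBUILD     # always review the recipe first
makepkg -sri      # fetch deps, build, install
```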
Besides the bloat of that kind of packaging, there's some sort of centralization. Although there are different flatpak repos, for example, there's sort of a "central" one where you find most of the stuff, and then you go to other repos offering some things you don't get on the "central" one; and if you don't like their packaging policies, then as usual, go have your own and package yourself, I'd guess… Distros have their different policies (not just packaging ones):
And the divergent policies adopted by different distros go beyond those. Such variety can't be compressed into a single one-size-fits-all solution. And if you've heard pretty influential and vocal open source guys challenging that diversity, and particularly the ability to build the SW differently, I suggest thinking twice about what that really means, and how it would affect different users with divergent policy affinities. I, for example, would prefer to use gnu+guix, but given the current hw assigned to me at the office, I can't be as free/libre SW focused as I'd like, though I still try to prefer as much free/libre SW as possible (slack and zoom are necessary evils at the office, for example)…
So this rich diversity is, in my mind, good and needed, and having several people building stuff is also good, since having just a few, with unique and common policies, might become dangerous…
Just another opinion, not exactly the same as the one from the "guy with a computer", but mine aligns somewhat with theirs.
This is totally new to me, and to be honest, habits are hard to break, so I'll try to use singular "they" when not sure. Basic/cheap English grammar lessons for non-native English speakers never teach you anything about singular "they". How sad, cause then someone who tries to write/speak as correctly as possible might still get it wrong, just because of how they got to learn their bit of English.
I know this will sound like blasphemy, but what I've read from Shakespeare, I read in Castellano, hehe, so I wouldn't have noticed what you mentioned. Good to know.
Well, to me, the OSS part of FLOSS is actually sad. To me it's better than totally proprietary and closed SW, but not enough. And the sad part is that they go as far as trying to totally ban RMS, and to dismiss the importance of free/libre SW, its licensing and values; I guess it's related to how corporate the OSS part of FLOSS has become.
While not supporting everything in the OP, I see no issue with mozilla trying to get funding so it can keep developing FF, for example. Though I don't agree with its corp-like business model, with CEOs abruptly making way more than devs while closing their research department, I do agree with trying to fund development. That said, I'd prefer FF to use the GPL3+ license rather than the mozilla license…
But this is my opinion, and no one has to agree with me. To me, we have fallen for the attractiveness of OSS, leaving behind the FL component. Disregarding the free/libre philosophy or values, and just focusing on quality, and of course the business side of things, in my mind allows big corporations to now be the ones upstreaming OSS, or at least the big ones doing so, but not FLOSS, and controlling what we use and how; just less dangerous than if it were done with proprietary SW…
I don't enjoy a static IP; the IP I get assigned is dynamic. Not sure if that helps. BTW, I went there, and nothing it shows is something anyone using my internet service has watched; actually, none of what I've downloaded shows up there, :) Still, I'm not confident, which is why I'm asking in the 1st place, :)
I've used VPNs to work remotely for more than 20 years, different sorts of them, unfortunately closed-source proprietary ones. VPNs give the VPN provider or service owner way too much knowledge of, and control over, the user. They do know who the user is, and they can also block whatever they want. VPNs make a lot of sense for remote work, but I've always been hesitant about personal VPN use. If I can get away without VPNs, that would be my preference.
That said, if there's no other way, then again, what's the recommendation for the best privacy and security possible? I've read good comments about NordVPN, and also about mullvad, and if I recall correctly, in some past post @dessalines@lemmy.ml suggested he was using mullvad. Perhaps a plus would be if their servers were all placed outside the 14-or-more eyes. But searching the internet, I guess it's hard to tell; there's always hot debate about them. Perhaps knowing I'm all for more privacy/security helps in providing advice to me. A VPN, as mentioned, is not something I'm happy to use. Also, I'd prefer, even if it sounds like a contradiction, to use and pay for a VPN from a non-profit, privacy-oriented organization; perhaps that helps a little bit with the trust issue…
Actually, it seems privacyguides suggests mullvad, although not directly said… Now, payment is another privacy issue: cash means trusting the post service with actual money, which is sort of a bad idea, and I don't possess any cryptos (if I were to, that'd be monero, which unfortunately is not supported)…
If only there were a way to set all connections encrypted, and to encapsulate the torrenting…
I do use common trackers. If somehow I could keep using them, just slower because of a more secure layer, then that wouldn't be an issue; but what I understand is that I wouldn't be able to use any tracker, right? Just some special ones supporting i2p?
On the other hand, I read I would need a specialized i2p torrent client as well, right? I don't want to move away from rtorrent; to me it's the best. I run it on a detached screen session (gnu screen), have set it up to automate some things as well, and being able to just ssh to the host and attach, from that ssh console, to its screen process is just priceless: no need for guis, web, etc…
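Roughly, that setup is just the following (host and session names are placeholders):

```
# On the torrent host: start rtorrent inside a detached, named
# GNU screen session, so it keeps running across logouts.
screen -dmS rtorrent rtorrent

# From anywhere: ssh in and attach to that same session;
# C-a d detaches again, leaving rtorrent running.
ssh torrent-host -t 'screen -r rtorrent'
```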
So, if I can just set the host to use i2p somehow, and tunnel or route torrent traffic through it, while still using any client (in my case rtorrent) and any trackers, then let me know how, :) So far, it seems that's not possible.
Also, if there's a way to enforce encryption, like the pseudo encryption I mentioned, but better, like using specialized universal/proxy trackers that allow finding seeders supporting pseudo-encrypted connections from the very start, that would be helpful as well.
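For what it's worth, rtorrent can already be told to require that pseudo encryption in both directions; a sketch for ~/.rtorrent.rc (keeping in mind it only obfuscates the stream, it's not a real privacy layer):

```
# ~/.rtorrent.rc: refuse plaintext peers, incoming and outgoing.
protocol.encryption.set = allow_incoming,try_outgoing,require,require_RC4,enable_retry
```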
Not discarding i2p at once, perhaps I misunderstood things…
Well, FF + Arkenfox, which is well regarded, doesn't get rid of FF's binary blobs. Librewolf does, and the things the author complains about in Librewolf can be tweaked, just as they can be on FF. Being in favor of tweaking FF while discarding Librewolf sounds wrong to me. Librewolf reduces quite a bit the amount of tweaking needed, leaving you with saner defaults than FF; its tweaks are based on Arkenfox, and you can still modify Librewolf as you wish. So I don't agree with just discarding Librewolf, or any other FF-derivative browser. Granted, they all depend on FF's success to keep going, but they do have an impact on what the user is left to tweak, and besides, they might or might not (Librewolf does) remove the binary blobs, which, in the end, you never know what they might do…
a bit outdated, perhaps partial, :) https://sourcemage.org/History
As I said, way cool: the codex, the grimoire, the spells, the casting, the dispelling, scribe, resurrect, summon, and so on. And, as with other source-based distros, there's the advantage on x86 of getting binaries really built for the CPU in your HW… I moved to Arch when I realized it was not possible for me to keep up with all the building; I was strict about rebuilding the whole system when there were gcc upgrades, for example (but it was also cool to experience how simple it was with sourcemage to rebuild everything, keeping dependencies and all that in mind). Then I moved to Artix when, ohh well, it doesn't matter any more… Next, probably Guix, but one can easily come to miss the AUR, and the ease of creating a self-custom repo. With sourcemage and arch/artix, creating our own custom packages and our own custom repos is really easy and fast (see the sketch below), which is hard to compete against, and then one can really get addicted to the AUR, :)
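And for reference, this is how cheap the self-custom-repo part is on arch/artix (names and paths are illustrative):

```
# Build the package and add it to a local repo database...
makepkg -s
repo-add ~/myrepo/myrepo.db.tar.gz ./*.pkg.tar.zst

# ...then register the repo in /etc/pacman.conf:
#   [myrepo]
#   SigLevel = Optional TrustAll
#   Server = file:///home/me/myrepo
```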