While I probably wouldn’t use a proprietary client I have no real issues with it. You can always switch away if you feel that it isn’t respecting you, even if it is harder to verify what it is doing under the hood.
Basically having a federated and open protocol greatly mitigates the downsides of a proprietary client.
So you own your own blog and content. You can host your blog on a WordPress site, GitHub Pages, Ghost, or wherever you want.
Sounds like it just pulls feeds from anywhere. Nothing specific about GitHub at all.
This is like saying that Google is a search engine for GitHub blogs 🤷 I mean it is, but it is also a much more general tool.
According to https://github.com/LemmyNet/lemmy/issues/1395 Lemmy sends WebMentions.
Regarding the edit:
What I should have said was providing a product in the same market. So the fact that it’s free might not be relevant, and a free instance of a decentralized social network could be considered to be in the same market as a commercial, centralized social network.
Again, I don’t think the exact product matters too much. Being in different markets can help, for example a Twitter Whistle would have a better argument than a Twitter Social Network. But for huge brands that are well known everywhere (like Twitter) the market difference tends to matter less.
I still don’t see twitter going after this single instance
Yes, it is unlikely. But definitely not impossible, given what Twitter has already done to try and push out Mastodon (like banning links). And if it does happen it will be devastating to the instance, as changing domains is painful. So I’m just suggesting that it may be best to play it safe here to avoid possible problems down the road.
I am not a lawyer. But…
I don’t think selling matters at all. The problem is if someone could confuse it. “Is Twitter Down” is arguably very hard to confuse as it is clearly something about Twitter (telling if it is down), and it doesn’t seem to present itself as being made/run by Twitter. This is further supported by the “by ryan king” in the corner.
However, if people often talk about #BlackTwitter as some subset of content on Twitter, it seems entirely possible that people could think that blacktwitter.io is run by Twitter. If it was called something like thenewblacktwitter or blacktwitteralternative it would be less likely to cause this confusion.
Another angle that they may argue is that they have an official product called “Twitter Blue” and they could argue that people would believe that “Black Twitter” is also an official Twitter product.
The most important thing to remember about trademark law is that it is very much about consumer protection. It doesn’t give you exclusive rights to use your trademark, it just prevents people from using your trademark to make something seem to be from you.
On top of all of that, being legally right isn’t the only thing that matters. If Twitter accuses you of trademark infringement, unless you want to hire a bunch of lawyers you are probably just going to do what they say.
WordPress is always a good option. They have a freemium hosted option at wordpress.com, or you can run your own. You can also always move to another provider or host it yourself.
Tumblr is also big and popular and there are tons of smaller hosts like https://bearblog.dev/.
Honestly I think a fresh coat of paint is what K9 needs most. The recent swipe gestures, both to navigate between messages and in the message list, have been fantastic. But really working through the UX one component at a time will be a dramatic improvement to K9.
For example, the folder classes UI makes simple things too complicated and more complicated things impossible. The compose window is OK but could use a cleanup. The search UX is pretty awkward (and buggy). I’m glad to see the message window improve as well. The fact that there is currently no way to see both the name and address of the sender is very annoying; I need to pop up the “Show Headers” option way too often. I’d also really appreciate more powerful options for remote content in messages. The current On/Contacts Only/Off is too simple for my taste.
I think this mockup shows understanding of the current design and what features are valuable and missing. Note that the mockup also has very long subjects and similar so this is the worst-case space usage. I’m sure it will also be refined a bit more before being shipped.
New things are always scary and carry some risk, but I’m personally quite optimistic.
I don’t use bookmarks in the traditional way; I really use them as a prioritized browser history. If I know that I might want to visit a page again I bookmark it, maybe add some keywords, then pull it up by typing in the URL bar. The point of the bookmark is mostly to ensure that it is synced to all devices and ranks with a high priority. Another benefit is that for websites with hard-to-understand URLs the bookmark icon can indicate which one I want.
I don’t know if I see that as a technicality. I see that as an important aspect of how abolishing copyright would work. I’m curious how this would be managed, is there a new law that all non-personal information is to be made public and freely available?
To me abolishing copyright and making all information public are very different things although obviously have some similarities.
Note that it isn’t the algorithm that is copyrighted. Algorithms are not copyrightable, IIUC. It is the way the code is written that is “art” and copyrightable. If this code was actually re-written using the same algorithm it would be fine. Much like you can own the text of a recipe but not the actual ingredients and steps of the recipe itself.
Of course you can still disagree. But I think that software is a creative endeavor and I think it is beneficial to provide some control to the author.
I do agree that software patents are generally harmful. There would maybe be some value in encouraging development and sharing of algorithms or techniques, but I think the time frame would need to be much shorter (5 years max, maybe?). In practice we have seen that most uses of software patents are not valuable to society, and many software innovations are released in research journals for free anyways, so the best option is probably just to scrap the idea.
There are examples of it outputting entire complex algorithms that are definitely copyrightable and reasonable to be copyrighted. A recent example is https://twitter.com/docsparse/status/1581461734665367554.
I think copyright can be absurd, and I think it needs to be cut back in a lot of ways. But I think some amount of copyright makes sense and GitHub Copilot sometimes violates what I see as morally correct.
Personally I don’t have any problem with it being trained on copyrighted code. I also think that much of the code produced by GitHub Copilot is “original” and free from copyright. However there are many examples of cases where it spits out verbatim or near-identical copies of copyrighted code. It is clear to me that the code in these cases is still owned by the original owner.
It is identical to human learning. I can read and learn from copyrighted code and write my own code with that newfound knowledge. However if I memorize and rewrite code it doesn’t magically make it mine.
Yes, you need to download all transitive dependencies.
But this isn’t dependency hell, it is just tedious. Dependency hell is when your dependency tree requires two (or more) versions of a single package, so not all of the dependencies can be satisfied (for example, A requires C 1.x while B requires C 2.x, and no single version of C satisfies both).
`apt` is the tool for downloading packages. So if you don’t have internet access, `apt` won’t be very useful.

The command to install packages on Debian is `dpkg`. So if you download a Debian package (usually named `*.deb`) you can install it with `dpkg -i $pkg` as long as you have the dependencies installed. Of course you can also install the dependencies this way, so just make sure that you bring the package and all packages that it depends on to the target machine.
That just seems to be about granting an app access to all keys, which is not quite the same as per-app keys.
I know that macOS has this for sandboxed apps from the App Store, and maybe they have it for “sideloaded” apps as well, but most OSes don’t have that. For Windows and Linux at least, there isn’t a good way to identify an “app” to separate it from any other. My macOS knowledge is rusty, but IIRC you install apps in a system-owned directory and apps only have permission to update themselves, so maybe you could use the application path as a key. The other listed affected OSes don’t have that, though.
But the malicious npm package can just read whatever key the app reads and then decrypt the values. They are running with the same permissions.
The only thing that really improves this is per-app sandboxing, but if you are sandboxing the app then it shouldn’t be able to read arbitrary files out of your home directory anyways.
Keychains are an improvement, but not much of one. 99% of users will just unlock the keychain upon login, so it doesn’t really provide much benefit. Unsandboxed apps are indistinguishable to the keyring daemon, so they can just request one another’s keys. (Maybe Windows or macOS has some code-signing magic so that the keyring daemon knows the identity of the app at a finer grain than the user level? But at that point we are really just back to sandboxed apps.)
Basically there is almost no point in most apps doing anything special to store sensitive files. If your app is sensitive enough that the user will be happy to unlock the keychain on every app launch, sure. But that is a nearly non-existent use case. In general the OS should just provide secure storage as the default. Sandboxed apps won’t have access to each other’s storage unless explicitly granted; for non-sandboxed apps there isn’t much you can do besides obscurity.
Is this an official channel or just a mirror of their YouTube channel?
I don’t get it. Of course the app stores these in cleartext; the app needs to access them to log in. Sure, it could encrypt them, but that is just obscurity: the key would have to be stored somewhere the app has access to for it to use the tokens.
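A minimal sketch of why encrypting at rest doesn’t help here (`openssl` is used purely for illustration, and the file names are made up): anything running as the same user can read the key file exactly like the app does.

```shell
# The app "protects" its token by encrypting it, but the key has to
# live somewhere the app can read (here, a file right next to it).
echo -n "s3cret-token" > token.txt
openssl rand -hex 32 > app.key
openssl enc -aes-256-cbc -pbkdf2 -pass file:app.key \
    -in token.txt -out token.enc
rm token.txt

# A malicious process running as the same user just does what the
# app does: read the key, decrypt the token.
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:app.key -in token.enc
```

The decryption step prints the original token, because nothing distinguishes the “legitimate” reader from any other process with the same UID.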
The article doesn’t seem to say that these were world-readable or otherwise visible to other users. So this seems like mostly a non-story. Use full disk encryption and you’ll be fine.
Communities have RSS feeds of posts. You should just be able to paste the channel URL (such as https://lemmy.ml/c/asklemmy) into your reader. (If your reader doesn’t support auto-discovery there is a feed icon on the channel page).
There are also user feeds. There don’t appear to be feeds for comments on a post or searches but maybe we can see those some day.
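For example, my understanding is that current Lemmy serves community feeds under a `/feeds/c/<name>.xml` pattern (auto-discovery or the feed icon makes knowing this unnecessary, so this is just a sketch):

```shell
# Derive a community's feed URL from its page URL.
# Assumes Lemmy's /feeds/c/<name>.xml pattern.
community="https://lemmy.ml/c/asklemmy"
instance="${community%/c/*}"   # https://lemmy.ml
name="${community##*/}"        # asklemmy
feed="${instance}/feeds/c/${name}.xml"
echo "$feed"                   # https://lemmy.ml/feeds/c/asklemmy.xml
```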
I take a slightly different approach to RSS that probably doesn’t work well for everyone but is perfect for me.
I get all of my RSS delivered via email by rss-to-email services. I then use filters to sort these updates into dedicated folders. So for example most of the updates go to “News” some feeds go to “Videos” and so on. I even have a few feeds that go directly to my inbox when I want to know about them right away.
The main benefits are:
The main downside is that I haven’t found an email client that pre-downloads images whereas this is a fairly common feature of dedicated readers. But this is a very minor issue for me. (Maybe I’ll send a patch to K9 some day)
I’ve been using this approach for almost a decade and am super happy with it. In fact I created my own rss-to-email service (FeedMail) in the past year to get exactly the behaviour I wanted. It is a paid service (but really cheap), and there are also ad-supported options like Blogtrottr (I used their paid plan until I created my own service).
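The sorting itself is just ordinary mail filtering. As a sketch, a server-side Sieve rule along these lines would do it (the addresses and folder names here are hypothetical):

```
require ["fileinto"];

# Feed updates from the rss-to-email service get sorted into folders.
if address :is "from" "videos@feedmail.example" {
    fileinto "Videos";
} elsif address :domain :is "from" "feedmail.example" {
    fileinto "News";
}
# Anything not matched (feeds I want right away) stays in the inbox.
```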
I really hope this ends well. The K9 dev has always been looking for funding so a full-time job working on K9 must be great. I really don’t care about the name change but hopefully this lets it move quickly.
I like K9 but some things are fairly awkward IMHO. Even just reading a few messages in a row is a lot of clicks. I would love some swipe gestures (there is a PR in progress for this IIUC). I also find the Tier 1/Tier 2 folders very complicated. It is both too limiting (I have more than 3 types of folder) and unnecessarily complex. I would love if we could get some more control here.
But overall it is a good client, so I’m hoping this works well.
I also use Thunderbird on desktop and recent improvements have been very good. I’m hoping that it keeps improving as well.
I’m curious how this works from a Windows host. Does it transfer the Windows version and play it via Wine? (Even if there is a Linux build available.) Or does it transfer the shared assets but download the difference?