Add trends UI with admin and user settings #11502
Conversation
Force-pushed from 7b2cc1c to 26c229e.
Force-pushed from 26c229e to 579d362.
Yes
Could you be more specific in your reassurance? :D
As @maloki said in #7702: "If you really want to make good on this anti-abuse promise, that you just made now right here in this thread, it would be nice if you acted like it when people are putting up MASSIVE warning signs about this being prime for abuse against individual people."
People who are vulnerable to abuse need to know exactly how you will minimise and prevent this system being used to abuse them. This is a feature that was previously removed due to significant community opposition on the grounds that it can be used to abuse vulnerable and marginalised groups, so I think more transparency is important. We need to know that you've considered all the concerns and have done things to deal with all the potential issues people raised.
Could you please respond to these points that were raised in other issues/PRs, to explain what you've done differently this time? Just so that it's clear that you haven't, for example, closed the previous PR due to a whirlwind of anti-feature discourse and then waited a few months and just reintroduced the same feature when all the vulnerable people have been discouraged enough to leave the platform and therefore no longer have input?
These two are broader issues that I think a lot of people would also like some reassurance on:
Here is a top-level summary of what's different this time: By choosing a whitelist-based approach to trends, it is possible to prevent harmful information from being broadcast rather than react to it after the fact. By adding an option to disable the feature from both the admin's side and the user's side, those who do not want it can opt out of it. Here are the point-by-point responses:
A hashtag won't trend unless it is allowed to, not the other way around. It can be disallowed again at any time.
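For illustration only, here is a rough TypeScript sketch of that whitelist gate; the names (`Tag`, `trendable`, `trendingCandidates`) are hypothetical and this is not the actual code in this PR, just the shape of the idea: approval is a hard filter, and flipping the flag off removes the tag from future trend calculations.

```typescript
// Illustrative whitelist gate for trends. All names here are hypothetical;
// this is not the implementation in the PR, only the shape of the idea.

interface Tag {
  name: string;
  trendable: boolean; // true only after an admin has explicitly allowed the tag
}

function trendingCandidates(tags: Tag[]): Tag[] {
  // Unreviewed or disallowed tags are filtered out before any scoring happens,
  // so nothing can trend "by default". Disallowing a tag (trendable = false)
  // takes effect on the next calculation.
  return tags.filter(tag => tag.trendable);
}
```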
Mind that there is a feature request for announcement banners in the interface to which this point would apply (#1209, #11006). Admins can do this right now with announcement posts and adding custom text to their UI.
A hashtag won't trend unless it is allowed to. A hashtag (or public timeline) could be scraped without any trending involved. In fact, it is hard to believe that the Occupy participants were targeted specifically through a trending hashtag rather than through the hashtag itself.
The feature can be entirely disabled for servers that don't want it. Admins can also individually opt out of getting notifications about unreviewed hashtags.
This fails to acknowledge why the described scenario is uniquely bad with a #pokemon hashtag vs without. Mastodon has a public timeline which is a far more powerful vector for this type of attack due to lack of fragmentation compared to hashtags. This vector of attack is addressed by other, pre-existing moderation tools, such as silencing, suspending, domain blocking, approval-only registration and others.
People have been shitposting under the #mastodev hashtag for a year without any trends involved. I suppose that's technically a competing ideological faction battling for supremacy over a tag. Would the incentive be bigger with trends? Yes, but it does not outweigh the benefits of having people know that the hashtag exists and finding the good content under it.
You can report spam. There is also a body of posts using a hashtag prior to it trending that are the reason it starts trending, and those are presumably not advertising, since the hashtag at that point is not yet trending. It is not a reason to bury #mastoart, #introductions, #finefemmefriday, #ff and others.
The way they are determined is no different from how a layperson would define a trend: "It is being used more than usual". It does not work in unexpected ways. Mastodon is "transparent" because it is open-source and displays chronological timelines. Neither of those two things changes.
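For illustration only, a minimal TypeScript sketch of that layperson definition, "used more than usual" (current usage compared against a historical baseline); the function and its thresholds are assumptions, not the scoring code in this PR.

```typescript
// Hypothetical "more than usual" check: compare today's usage of a hashtag
// against its average daily usage over a trailing window.

function isUsedMoreThanUsual(
  dailyUses: number[],      // usage counts for the last N days, oldest first
  todayUses: number,        // usage count so far today
  threshold = 2             // "more than usual" = at least 2x the baseline
): boolean {
  const baseline =
    dailyUses.reduce((sum, n) => sum + n, 0) / Math.max(dailyUses.length, 1);
  // Guard against brand-new tags with no history: require some absolute activity too.
  return todayUses >= 5 && todayUses > threshold * Math.max(baseline, 1);
}

// Example: a tag that usually sees ~3 uses/day suddenly sees 20 today.
console.log(isUsedMoreThanUsual([2, 4, 3, 2, 4], 20)); // true
```

Note that a consistently popular tag fails this check by design: its baseline is already high, so steady use does not register as a trend.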
This could already be happening. It is probably happening.
An incentive to use hashtags on public posts to categorize them better is the intention. It is hard for users to find content unless they have many friends, and it is hard for users to even know which hashtags exist unless they spend a long time observing the public timeline. There are people who complain about follower counts, the "boost" feature, the "favourite" feature, and the idea of "clout", which could all be considered to be incentives and rewards for unwanted behaviour. However, you should not throw out the baby with the bathwater.
The feature can be entirely disabled for servers that don't want it. Every user can hide it. Everyone left Twitter (or didn't) for their own, different reasons.
By the definition of a trend being "used more than usual", consistently popular hashtags will not receive a visibility boost. It is also perhaps the only visibility into those "popular conversations" for new users who are not yet well-connected.
By the definition of a server owner being physically in charge of the server and responsible for what is published there, it is already in the power of "1 or 2 people" to say what is good or not to say in a community, and this is usually outlined in a "code of conduct" that people agree to when signing up.
The utility of this suggestion is questionable as it would increase the workload for admins on already vetted hashtags. If the vetting must be undone, it can be done manually.
Right now this is in the hands of users in the form of choosing whether or not to use hashtags or use a different privacy setting. Since non-Mastodon software already has trends (e.g. Misskey, @gled-rs's "mastodo" fork, as well as 3rd party trend bots), it is difficult to guarantee any additional constraints on what the remote server does with the data you send it.
Non-Mastodon software implementing trends and 3rd party Mastodon bots that post trends based on scraping public timelines show that the cat is out of the bag for this kind of use case. A government can use automation to continuously watch public timelines to determine trending tags on their own. An average user who wants to find out what's up on Mastodon can't. Not having trending tags would not stop governments, but would disadvantage normal users. The "authorized fetch" mode of Mastodon that was introduced recently disallows public access to Mastodon's REST APIs, which would make scraping of public timelines harder by requiring the scraper to have a local account on the server. However, in practice, it would be hard for someone to distinguish a government agent from a normal user.
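To make the scraping point concrete, here is a sketch of what "scraping the public timeline" amounts to. The endpoint path follows Mastodon's documented REST API (`GET /api/v1/timelines/public`); the instance URL and token are placeholders, and the behaviour under authorized fetch is as described in the paragraph above.

```typescript
// Sketch: scraping a public timeline, and how authorized fetch changes it.
// The instance URL and access token are placeholders.

const INSTANCE = "https://mastodon.example";

async function fetchPublicTimeline(accessToken?: string) {
  const res = await fetch(`${INSTANCE}/api/v1/timelines/public?limit=40`, {
    // On an instance with authorized fetch (secure mode) enabled, the
    // unauthenticated variant of this request is rejected, so a scraper
    // would need a local account and its access token.
    headers: accessToken ? { Authorization: `Bearer ${accessToken}` } : {},
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json(); // array of public statuses, hashtags included
}
```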
Yes, which makes it more useful for Mastodon than a bunch of trend bots that inexperienced users don't know about. Due to the whitelist-based nature of the feature, a human is still in the loop for the output of this algorithm.
TERFs and abusive people should be banned from the given Mastodon server, or domain blocked if they have their own. Any features involving communication would be considered harmful if we operated on the assumption that bad actors are left to roam free. The trends in your scenario would potentially help newly arrived vulnerable people find a community on Mastodon. Hope this answers everything.
Me too!
The responses to my concerns don't address them. The major improvement, whitelisting, is welcome. However, my other concerns are not adequately addressed, and so I continue to oppose this feature.
"An incentive to use hashtags on public posts to categorize them better is the intention." Okay, but the concerns about gamification and incentivizing unwanted behavior are not addressed, except for "don't throw out the baby [trending tags] with the bathwater [creating a new incentive for unwanted behavior]." What's intended isn't worth the side effect of creating a new incentive to game the content of posts, because it decreases the quality of everybody's feed. The discoverability of hashtags isn't worth that price.
"By the definition of a trend being 'used more than usual', consistently popular hashtags will not receive a visibility boost." That's fine, but it's not a response to my criticism, which wasn't about "consistently popular" hashtags. The criticism was that things that are popular, even uniquely, don't need the feedback loop of having their popularity reinforced by the UI. I don't need MORE people posting about #bingocards or #feralhogs when they trend; it already happens plenty on its own.
Besides my concerns not being adequately addressed, and though the whitelist approach is an improvement, this still doesn't address the major abuse-related concerns. We have to assume, for example, that malicious admins will use this code and use its gamifying nature to drive harassment and the spread of hate. My being able to turn this off on my instance doesn't turn THAT off. And blocking that instance doesn't contain the damage, because everyone they federate with that has trending turned on will be contaminated by the gamified malice. So I'll have way more instances that need to be blocked. In effect this code will exacerbate the division of the network: instances that allow gamification and instances that renounce it. Not exactly in keeping with the goal of having a unified network.
Finally, this would just create a vector of abandoned whitelisted tags for people to abuse; the utility of the suggestion is to avoid exactly that. Why would one assume that a previously whitelisted tag is free of abuse? It's this sort of casual dismissal of abuse concerns, running through this proposal in order to create a feature whose need is dubious in the first place, that makes people skeptical that these things have been well thought through before they make it into the main branch.
This is bothering me. Yes, other instances may be malicious, but this is the argument that comes up a lot here: "malicious instances will find a way to do the thing anyway, so there's no point us making a feature." It was used against deleting toots, even! "A malicious instance could hold on to toots you've deleted," as an argument against letting people delete their toots and that delete signal being sent out to other instances.
The trends API exposes trending hashtags that have been reviewed by the server's staff. A new admin setting can disable the feature (and stop e-mail notifications about hashtags needing review from being sent to the staff). A new appearance preference can hide the trends on the user's side.
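For anyone wanting to try it, a minimal sketch of consuming the endpoint from a client. The path `GET /api/v1/trends` and the response shape are my reading of the trends API; treat the exact field names, the example instance URL, and the behaviour when the admin has disabled the feature as assumptions.

```typescript
// Sketch: reading the reviewed trends from a server that has the feature enabled.
// Only hashtags approved by the server's staff should appear here.

interface TrendTag {
  name: string;
  url: string;
  history: { day: string; uses: string; accounts: string }[];
}

async function getTrends(instance: string): Promise<TrendTag[]> {
  const res = await fetch(`${instance}/api/v1/trends`);
  if (!res.ok) throw new Error(`Trends request failed: ${res.status}`);
  // If the admin has disabled the feature, expect an empty list or an error
  // (behaviour may vary by version).
  return res.json();
}

getTrends("https://mastodon.example").then(tags => {
  for (const tag of tags) {
    console.log(`#${tag.name}: ${tag.history[0]?.uses ?? "?"} uses today`);
  }
});
```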
Single-column UI:
Multi-column UI: