BannedContentDiscussion

We’re trying to fight WikiSpam here.

We’re using a Captcha to protect this wiki: Oddmuse:QuestionAsker Extension. In addition to that, we support a BannedContent page that prevents pages from being saved if they contain text matching one of the regular expressions.


Old

The rest of this page contains outdated information.

Related pages:

Discussion:

Anti-Spam Network

If you use our format, you can join the network. Basically we all copy our BannedContent from EmacsWiki BannedContent.

Format

See MeatBall:SharedAntiSpam for the format. Basically, # introduces a comment; everything else is a regular expression that matches URLs.
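
For illustration, a minimal Python sketch of reading and applying a list in that format (Oddmuse itself is Perl; the sample entries and function names below are made up):

    import re

    # Sample list in the shared anti-spam format: lines starting with "#"
    # are comments, every other non-blank line is a regular expression
    # that matches URLs.  These entries are made up.
    SAMPLE_LIST = """\
    # 2005-02-12 casino spam wave
    casino-[a-z]+\\.example
    # free hosting abuse
    cheap-pills\\.example\\.net
    """

    def load_patterns(text):
        """Compile the regexp lines, skipping comments and blank lines."""
        patterns = []
        for line in text.splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                patterns.append(re.compile(line, re.IGNORECASE))
        return patterns

    def is_banned(page_text, patterns):
        """True if any banned-content regexp matches the page text."""
        return any(p.search(page_text) for p in patterns)

    patterns = load_patterns(SAMPLE_LIST)
    print(is_banned("visit http://casino-royale.example/ now", patterns))  # True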

Implementation

The script that copies regular expressions from A to B, but leaves the comments at B intact and adds timestamps for the new regular expressions imported:
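
Not the real script, but a rough sketch of the merge logic it performs, written in Python for illustration (the real one is a Perl script); the function names and date-stamp format are assumptions:

    import time

    def entries(text):
        """Return the regexp lines of a banned-content list, comments skipped."""
        return [line.rstrip() for line in text.splitlines()
                if line.strip() and not line.lstrip().startswith("#")]

    def merge(source_text, target_text):
        """Append regexps found in the source but not yet in the target.

        Comments already present in the target are left untouched; newly
        imported regexps are preceded by a dated comment.
        """
        if not source_text.strip():
            # Guard: never let an empty or broken source page wipe the target.
            return target_text
        known = set(entries(target_text))
        new = [r for r in entries(source_text) if r not in known]
        if not new:
            return target_text
        stamp = time.strftime("# imported %Y-%m-%d")
        return target_text.rstrip("\n") + "\n" + stamp + "\n" + "\n".join(new) + "\n"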

The real script uses wikiput, which is available here (for Oddmuse wikis):

References:

Joining

If you want to use this list, just go ahead. It might be easier to parse if you get the plain text:
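
For example, a small Python sketch that fetches the plain-text version; the raw URL is the one quoted further down this page and may have changed since:

    import urllib.request

    # The "raw" URL returns the page as plain text rather than rendered HTML.
    RAW_URL = "http://www.emacswiki.org/cgi-bin/wiki/raw/BannedContent"

    text = urllib.request.urlopen(RAW_URL, timeout=30).read().decode("utf-8")
    regexps = [line for line in text.splitlines()
               if line.strip() and not line.lstrip().startswith("#")]
    print(len(regexps), "banned-content regexps loaded")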

Cases

Banned Content Reduces Pagerank Case – a web hosting provider wants to get a client off the list…

Discussion

Yes. I appreciate the forward thinking in this direction. But I guess you know how super-touchy this is. See SpamIsInformation.

I am greatly concerned by the over-inclusiveness of some of the regular expressions used to ban domains. For example, if sex.com were banned like most of the banned domains, it would probably appear as sex\.com on the list. However, such an expression would also match content like sussex.com, sex.comprehensive-doctors-advice.org, or sussex.company-maps.co.uk.

I highly recommend that when banning domains, the regular expressions look something like \b([\w\-.]+\.)?example\.com\b instead of just example\.com (see the quick check after this comment).

– RichardParker?
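
To make the difference concrete, a quick check with Python’s re module, using the hostnames from the comment above plus www.sex.com as a genuine match:

    import re

    hosts = ["sussex.com", "sex.comprehensive-doctors-advice.org",
             "sussex.company-maps.co.uk", "www.sex.com"]

    naive  = re.compile(r"sex\.com")
    strict = re.compile(r"\b([\w\-.]+\.)?sex\.com\b")

    for host in hosts:
        print(host, bool(naive.search(host)), bool(strict.search(host)))

    # The naive pattern flags all four hosts; the anchored pattern
    # only flags www.sex.com.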

The script “merge-list” no longer works from my PC. Error msg is “http://www.emacswiki.org/cgi-bin/wiki/raw/BannedContent: 500 Internal Server Error”. “wget http://www.emacswiki.org/cgi-bin/wiki/raw/BannedContent” also fails. But my Firefox browser can load the same page fine. Any ideas? Could user-agent ‘Mozilla/5.0’ possibly be blocked??

My wikis are located on a server that has been having Apache 2 trouble. From time to time it goes down: the machine is on the net and you can send packets to port 80, but the web server never responds, resulting in a time-out. Maybe that explains it? Note that I fixed the wikiput script to only put the page if there is actually any content. My cron job previously didn’t check for that and would erase the target page whenever it got an error from the source page. :/

Greets, I contribute to wiki.s23.org. I was just wondering: what is the overhead of loading all the items on the BannedContent page into a wiki engine? Also, I’m confused as to why there are no numeric IPs on the list. If you reply, please add a quick follow-up link pointing back here on S23-InterNal. Cheers --Kunda

I have no numbers on the loading of all items. Two things: I suspect that loading the file eats about as much memory as the file has bytes, and it is only loaded when a page is edited. That doesn’t happen too often. As for IPs, note that this is banned content. There is also a BannedHosts page. The BannedContent applies to the content of pages, not the host/IP of the author.

DOWN WITH CENSORSHIP!… BY WHAT RIGHT DO YOU BELIEVE YOURSELVES AUTHORIZED TO DRAW UP A BLACKLIST OF SITES?… ON WHAT CRITERIA?… AND WHY NOT A BOOK BURNING WHILE YOU’RE AT IT?… THIS IS NAZISM!…

Some people have blacklists larger than the limit for wiki page sizes.

I suggest modifying OddMuse to merge the lists on all pages matching this regexp:

BannedContent\d*

that is, if BannedContent isn’t big enough for you, you could continue the list on page BannedContent2, or BannedContent1234, etc.
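
A rough sketch of what that merging could look like (Python purely for illustration; Oddmuse is Perl, and the all_page_names / read_page hooks below are stand-ins for whatever the engine actually provides):

    import re

    # Pages named BannedContent, BannedContent2, BannedContent1234, ...
    CONTINUATION_RE = re.compile(r"^BannedContent\d*$")

    def combined_banned_content(all_page_names, read_page):
        """Concatenate the text of every page whose name matches the regexp."""
        parts = [read_page(name) for name in sorted(all_page_names)
                 if CONTINUATION_RE.match(name)]
        return "\n".join(parts)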

Hm, yes…

Thinking about another format. Many WikiEngines do not use Perl and won’t be able to implement such a format. This format is also very hard to extend. For example, first of all, on my wiki, which is about sex, sex.com isn’t spam… Why not use an agnostic format with more properties and extensibility? I have started making some proposals for an RDF vocabulary (it’s in French, but the vocabulary itself is in English):

Hm. I don’t think people would like to put this much effort into classifying spam. As for your sex.com example, just add it to your local exception list… (I don’t speak French, so I can’t comment on the actual contents of your page) – JorgenSchäfer:2005-02-12 21:43 UTC


The guys at chongqed.org are also maintaining a blacklist: http://blacklist.chongqed.org/ This is based on the spam submissions they get, which are checked manually to ensure only real spammers are listed. There are a few wikis that take that as their primary blacklist to synch with, and the dokuwiki software includes it by default. It could be they grabbed regular expressions from here to start the list; certainly it might make sense to organise some more inter-operation in this way. I think automated updating of blacklists can be extremely effective: a spammer attacks one wiki and then finds themselves immediately shut out of many wikis. But it only helps if propagating the regexps is something which happens automatically.

I like CharlesNepote’s idea (although I also don’t understand French). At the moment you’ve started forming a little trusted network, but if we can keep track of who (which wiki) added a regexp, this might allow a kind of semi-trusted network to form, in which regexps propagate from the wiki which suffered the first attack to all other wikis automatically. I also think it would be good to keep track of when a regexp was added, and possibly when it last caught spam. This is useful for identifying ones which we don’t really need any more. This kind of information does not necessarily require a great deal of human classification effort. On the downside it will require considerable implementation effort, which makes it unlikely to be adopted across many different wiki softwares. See also discussion here. – halz@chongqed

I’ve seen spam containing only links to sites with domain names provided by dynamic DNS services or free hosting sites. This is problematic because the URL blacklist works for economic reasons: registering new domains costs (a little bit of) money. Either our blacklists will start blocking these, or we’ll have to find another method of banning content.

I’ve been interested in wikis.onestepback.org/Ruse/page/show/FrequentlyAskedQuestions and their idea of a “TarPit?”. They claim it is working well on an open wiki. (Their wiki engine is a Ruby clone of UseMod.)
