Linuxintro:About
Revision as of 13:05, 18 October 2009 by ThorstenStaerk
What is it about
After 8 years of practical Linux experience, I have some ideas about how Linux documentation on the web could get even better. That is why I started this site. Here is what I think should apply to every tutorial on the web.
- There are no experts. Being an expert only enables you to learn even more from a given text: as an expert, you read something and still find things that really "everyone should know".
- Be practical. When I searched the web for "use camcorders with Linux", I just wanted to know which application to start to connect my camcorder and transfer the video. One article I found was three pages long: it started by explaining that with Linux everything is modular and thus better, went on to a rant about Microsoft, explained how to compile a Linux 2.2 kernel so FireWire works, and ended by declaring that Linux is better than Windows. It just forgot to mention which program to start. Always name the program to start for a given task.
- Fetch the reader from where he is. When I was teaching Linux, people asked how to use ls to find out the size of a directory. You cannot find this out with ls; you need to use du. But as long as you do not know du, you will not look it up. And as soon as you know du, you will not need to look it up, because you know it already. The consequence: when writing a tutorial on ls, make a link to du in it, about like this:
To find out a directory's size, do not use ls, but du.
Also make a link to du in your article on directories.
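The ls-versus-du point above can be shown in a short shell session (the directory /etc is just an example):

```shell
# du -s summarizes the total instead of listing every file;
# -h prints human-readable sizes (K, M, G).
du -sh /etc

# ls -ld only shows the size of the directory *entry* itself
# (typically a few KB), not the data stored inside it:
ls -ld /etc
```

This is exactly the trap described above: ls -l looks like it reports a size, but it is the size of the directory's index, not of its contents.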
- Man pages are not enough. Or is there a man page how_to_install_linux_on_a_usb_disk, a man how_to_use_a_camcorder_with_Linux, or a man how_to_setup_a_webcam_with_Linux?
- Man pages give you a false sense of knowing something. Read, for example, addr2line's man page and tell me why addr2line /bin/bash 200 does not work. You cannot tell from the man page.
- Wikipedia is not enough. Wikipedia says about itself that it is not a tutorial.
How was it done
I used plain MediaWiki software. The only addition is the ArticleComments extension. I changed it a bit to fit my needs: allow web links in pages, but not in comments, because comments can be posted by non-logged-in users, while you need to be logged in to edit wiki pages or talk pages:
--- /root/ArticleComments.php	2009-10-18 09:01:30.000000000 +0200
+++ ArticleComments.php	2009-10-18 13:06:40.000000000 +0200
@@ -173,10 +173,10 @@ function wfArticleCommentForm( $title =
     'value="'.$title->getNamespace().'" />'.
     '<p>'.wfMsgForContent($ac.'name-field').'<br />'.
     '<input type="text" id="commenterName" name="commenterName" /></p>'.
-    ($params['showurlfield']=='false' || $params['showurlfield']===false?'':
-        '<p>'.wfMsgForContent($ac.'url-field').'<br />'.
-        '<input type="text" id="commenterURL" name="commenterURL" value="http://" /></p>'
-    ).
+    # (true || $params['showurlfield']===false?'':
+    #    '<p>'.wfMsgForContent($ac.'url-field').'<br />'.
+    #    '<input type="text" id="commenterURL" name="commenterURL" value="" /></p>'
+    # ).
     '<p>'.wfMsgForContent($ac.'comment-field').'<br />'.
     '<textarea id="comment" name="comment" style="width:30em" rows="5">'.
     '</textarea></p>'.
@@ -470,11 +470,16 @@ function defaultArticleCommentSpamCheck(
     # Run everything through $wgSpamRegex if it has been specified
     global $wgSpamRegex;
+    $ourspamregex="/http/i";
     if ($wgSpamRegex) {
         foreach ($fields as $field) {
             if ( preg_match( $wgSpamRegex, $field ) ) return $isspam = true;
         }
     }
+    foreach ($fields as $field)
+    {
+        if (preg_match($ourspamregex, $field)) return $isspam=true;
+    }

     # Rudimentary spam protection
     $spampatterns = array(
@@ -502,4 +507,4 @@ function defaultArticleCommentSpamCheck(
     return true;
 }
-//</source>
\ No newline at end of file
+//</source>