LINUX GAZETTE
July 2003, Issue 92 Published by Linux Journal
Front Page | Back Issues | FAQ | Mirrors
The Answer Gang knowledge base (your Linux questions here!)
Search (www.linuxgazette.com)
----------------------------------------------------------------------
* The MailBag
* More 2-Cent Tips
* The Answer Gang
* News Bytes, by Michael Conry
* HelpDex, by Shane Collinge
* Ecol, by Javier Malonda
* select() on Message Queue, by Hyouck "Hawk" Kim
* Linux to Save the Health of the World, by Janine M Lodato
* My Open Radio, by Mark Nielsen
* Setting up the mail subsystem in Linux, by Ben Okopnik
* Qubism, by Jon "Sir Flakey" Harsem
----------------------------------------------------------------------
Linux Gazette Staff and The Answer Gang
Editor: Michael Orr
Technical Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti
----------------------------------------------------------------------
TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
are files containing the entire issue: one in text format, one in HTML.
They are provided strictly as a way to save the contents as one file for
later printing in the format of your choice; there is no guarantee of
working links in the HTML version.
----------------------------------------------------------------------
Linux Gazette[tm], http://www.linuxgazette.com/
This page maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright (c) 1996-2003 Specialized Systems Consultants, Inc.
----------------------------------------------------------------------
+------------------------------------------------------------------------+
| LINUX GAZETTE | The Mailbag |
| ...making Linux just a little more | From The Readers of Linux Gazette |
| fun! | |
+------------------------------------------------------------------------+
----------------------------------------------------------------------
HELP WANTED : Article Ideas
Submit comments about articles, or articles themselves (after reading our
guidelines) to The Editors of Linux Gazette, and technical answers and tips
about Linux to The Answer Gang.
----------------------------------------------------------------------
* BiDi Problems in WINE + SMARTDRAW
* Squid and FTP
* create new lilo boot loader - on 2nd drive
-------------------------------------------------------
BiDi Problems in WINE + SMARTDRAW
Thu, 12 Jun 2003 17:49:30 -0300
Daniel Carneiro do Nascimento (dcn from microlink.com.br)
#sorry about my english... i'ved learned that by myself.. so ..
# U can make some modifications < of course.. it's gpl..> in my english
mistakes
So I did, just a little, though usually we leave questions alone so
people have a sense of how the querent meant things :) -- Heather
Hiya guys..
I have a problem ( d' aah)
I've tried to use SmartDraw under wine.. and then.. after I configure
everything.. It works! At least, I think that, when I see SmartDraw
starting.. showing the initial WELCOME.. etc.. but.. when he tries to show
me the initial screen < to chose the objects of my diagram> BUMMER! My
wine DIES.
my log is so big.. and every thing happens about BiDi...
#] warn:font:GetCharacterPlacementW The BiDi algorythm doesn't conform
to Windows'
And then.. BiDi throws a lot of junk < i suppose> in my memory causing
some HEAPS Faults:.
#] warn:heap:HEAP_IsRealArena Heap 0x40db0000: block 0x408acf is not
inside heap
there's not an upgrade for BiDi available.. and.. since November 22.. BiDi
has been going crazy... with some programs that request some kind of..
font.. i don't know...
The HEAP Faults problem.. I solved myself making a bigger "X:/temp" and
including a new path for junk.. but.. WINE couldn't pass through BiDi,
when it get a crash.. cause the BiDi NEVER stops to send some.. THING. < i
don't know what either.> to the memory.. that fills up.. whatever is your
/temp size! < mine is 2 G!>
I just don't know what to do! I'm really really lost.. and.. I need to
make wine work... it's not for the program itself.. it's for the HONOR!
AHUuhauahh
DO you guys know ANYTHING about that Suddenly Crashing?!? Or..
incompatibility ? Or whatever you call it... ...
Tnkx so much for reading my crappy email...
PS:. .. HEEEEEELP!
Daniel Carneiro do Nascimento
-------------------------------------------------------
Squid and FTP
Fri, 27 Jun 2003 11:26:16 +0300
Nickos Yoldassis (niyo from teipat.gr)
Hi there,
I use squid as a proxy server (default configuration) and it seems that i
can't connect to ftp sites through it. Do I have to do anything?
Nickos, Greece
It appears that this is an FAQ in the land of Squid, number 12.17 -- "Can
I make my regular FTP clients use a Squid cache?"
Nope, it's not possible. Squid only accepts HTTP requests. It speaks FTP on
the server-side, but not on the client-side.
The very cool wget will download FTP URLs via Squid (and probably any
other proxy cache).
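To illustrate, here's roughly how that looks from the command line. The proxy host and URL below are placeholders, not real servers; 3128 is Squid's default listening port, but substitute your own values:

```shell
# Point wget's FTP fetches at the Squid proxy (placeholder host and URL;
# 3128 is Squid's default listening port).
export ftp_proxy=http://proxy.example.com:3128/
wget ftp://ftp.example.org/pub/somefile.tar.gz
```

wget then issues a plain HTTP request to Squid, and Squid speaks FTP to the far server on wget's behalf.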
However, it would be fun to have an article about somebody using Squid
and/or other site caching software in powerful ways to make their site's
view of the web more fun. There are a bunch of add-ons at Freshmeat for
it, so I'm sure someone out there has a great example for us to follow.
Don't forget to read our author submission guidelines. -- Heather
-------------------------------------------------------
create new lilo boot loader - on 2nd drive
Fri, 13 Jun 2003 02:42:27 +0000
Geraldine Koh (geradin07 from hotmail.com)
Hi people, I have a problem......
I'm actually trying to mirror the hard disks using RAID 1 in Red Hat 9. It
can work perfectly, but the bug is that i can only boot up the first hard
disk; i suppose lilo is stored as the MBR in it. The second hard disk,
during booting up, shows LI and i boot it using a bootup diskette instead.
I'm wondering how to implement lilo on the second HDD in such a way that
it auto boots up just like the 1st HDD. Is it possible?
Is it true that only 1 MBR can be used, or will it work with 2 MBRs on the 2
respective hard disks?
I visited the Boot + Root + Raid + LILO HOWTO documentation and tried this
method to boot up the second HDD, but there's an error.
This is the RAID LILO config file pair that I implemented:
See attached geraldine.lilo.conf.hda.txt
I created these 2 lilo configuration files but am not too sure whether they
are being read or not, because i still have a current default lilo file,
/etc/lilo.conf
See attached geraldine.default.etc-lilo.conf.txt
Basically that's about all... I hope your gang can resolve my problem. Sorry
if i bored you to sleep with such a long email. Hope to hear from ya
soon...
Cheers, Geraldine
----------------------------------------------------------------------
GENERAL MAIL
----------------------------------------------------------------------
* Re: Linux Gazette in Palm Format
* Article Ideas - Semaphores
-------------------------------------------------------
Re: Linux Gazette in Palm Format
Fri, 30 May 2003 17:47:34 -0400
Ben Okopnik (the LG Answer Gang)
Question by Herbert, James (James.Herbert from ds-s.com)
On Fri, May 30, 2003 at 12:36:02PM -0700, Heather wrote:
[Ben] You can use "bibelot" (available on Freshmeat, IIRC); it's a Perl
script that converts plaintext into Palm's PDB format. I have a little
script that I use for it:
Does the raw PDB format have a size limit? Our issues can get pretty big
sometimes... -- Heather
[Ben] "The Complete Shakespeare" was over 5MB. No trouble at all, except
for uploading it ("jpilot" wouldn't do it; neither would the Wind0ws prog
on my brother's machine. Kudos to "coldsync".)
Plucker is an open source palm document reader and in my humble opinion
THE BEST. There are some really good Linux GUI document converters
available for it.
I checked out sitescooper but unfortunately it is very out of date;
I'll have to look at installing the scripts on my own box.
The issue I have when converting the site manually is that as the site
references links external to the main document, I get duplicate copies of
the articles in one document, hence an extremely large file (Issue 91 is
1.98MB !!)
Anyway thanks very much for your help, I was quite surprised to get a
response for such a trivial question --- thanks again
James
Glad we could help, though I'm disappointed to hear sitescooper isn't
keeping up to date. -- Heather
-------------------------------------------------------
Article Ideas - Semaphores
Tue, 3 Jun 2003 08:34:46 -0700
rwillis (rwillis from ctf.com)
I have done some searching on the internet for semaphores and have found
very little info, and no tutorials. I think that you could use this as a
topic to supplement your article on Message Queues in Issue 89 (
"Exploring Message Queues, Part I" , Raghu J Menon).
Suggested Sections
1 SystemV Semaphores (semget, semop semctl)
2 POSIX 1003.1b Semaphores (sem_init, sem_wait, sem_trywait, sem_post,
sem_getvalue, sem_destroy)
I have heard mention of something called pthread semaphores, but I am
unsure as to what these are, or how to use them.
BTW, SystemV semaphores use key_id (int) which must be unique. ftok() can
be used to hash a key from a filepath and a project id, but there must be
other ways to generate keys...
It would be really nice to see examples of this in action, as that is one
thing that I could not find (exclusively for Linux that is).
Great Magazine!
Thanks,
Richard Willis, B.Eng (EIT)
----------------------------------------------------------------------
GAZETTE MATTERS
----------------------------------------------------------------------
* The things we have to go through to get our articles
-------------------------------------------------------
The things we have to go through to get our articles
Fri, 30 May 2003 17:42:52 -0400
Heather Stern (Linux Gazette Technical Editor)
Ooh, ooh. . . . I used to ... at a former job, and hereby volunteer to
write an article on setting up an equivalent. I may need some shouting
and/or threats of physical violence to overcome my procrastination though.
Black helicopter request has been filed. It'll be right over as soon as
our local operative-in-dark-glasses can fix the autopilot. Of course, if
you finish the article before liftoff, do let us know, and we'll send over
one of Wooner's beautiful dame clients to pick up the package...
Will do. Um... over and out (?)
Heh. One beautiful dame, coming up next article. Watch for long legs,
slinky dresses, and languorous questions about whistling ability. -- Ben
And if you're the sort of person who can fry a good article up sometime
this summer -- to make Linux a little more fun for folks who get dizzy
when they need to know what sorts of barbecue briquettes are used for
firewalls around here - do let us know. We're planning our editorial
schedule to lay out how August and September will be released, and having
some articles in ahead of time would be really, really handy. Now I
can't guarantee a personal pick-up by ultra modern black helicopter with
an absolutely gorgeous - shall we say bombshell? - dame flying it, but
we can ask! :) -- Heather
----------------------------------------------------------------------
This page edited and maintained by the Editors of Linux Gazette
HTML script maintained by Heather Stern of Starshine Technical Services,
http://www.starshine.org/
Copyright (c) 2003
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 92 of Linux Gazette, July 2003
----------------------------------------------------------------------
+------------------------------------------------------------------------+
| LINUX GAZETTE | More 2-Cent Tips! |
| ...making Linux just a little more | By The Readers of Linux Gazette |
| fun! | |
+------------------------------------------------------------------------+
See also: The Answer Gang's Knowledge Base and the LG Search Engine
----------------------------------------------------------------------
* Backup Software: Robustness
* can I have Linux on a ThinkPad G40? with WinXP?
* Re: [Blt-newuser] Request suggestion for ftp server --or--
FTP Daemons (Servers) and Alternatives: Just Say No?
* Pause after running xterm
* Tips on PDF conversion
* quotas on directories?
* What is Reverse DNS?
* Subscribe to groups...........pan,Knode.......????
* Confused about symantics of "mount -o,async/sync" commands
* Linux Journal Weekly News Notes - Tech Tips
-------------------------------------------------------
Backup Software: Robustness
Mon, 2 Jun 2003 08:09:29 +1000
Nick Coleman (njpc from ozemail.com.au)
This is a reply to a letter in the Mailbag of the June 2003 issue of Linux
Gazette, "compressed tape backups".
Quite a while back I remember a discussion on compressed tar archives on
tape and the security risk, i.e. the data would be unrecoverable behind
the first damaged bit.
Now at that time I knew that bzip2, unlike gzip, is internally a blocking
algorithm and it should be possible to recover all undamaged blocks after
the damaged one.
Your correspondent may like to look into afio instead of tar for backups.
I believe it recovers from errors much better. The mondo rescue tool
developer uses it.
Regards,
Nick Coleman
[JimD] The problems recovering tar files are worst with GNU tar
operating on gzip'd archives. star (by Joerg Schily, of cdrecord and
mkisofs fame), cpio, and pax are all better at resynchronizing to the
archive headers past a point of file corruption than GNU tar.
afio might very well be better than cpio. I don't know; I have neither run
my own tests nor perused the code.
In general I'd suggest that added redundancy (both through ECC -- error
correction coding -- and additional separate copies) is the better way
to make one's backups more robust.
I've heard that BRU (backup/recovery utility: http://www.tolisgroup.com
a commercial product) adds ECC and checksum data to the archive stream
as it performs backups --- and defaults to verifying the archive
integrity in a second pass over the data. With cpio, afio, tar, star,
dump/restore and pax you have to write your own scripts to perform the
verification pass. (cpio and presumably afio do add checksums, GNU tar
doesn't, I don't know about the others). So far as I know none of the
common free tools adds additional ECC redundancy to their archives.
There is an obscure little utility called 'ras' (redundancy archive
system) which can be used to create a set of ECC (sum) files to go with a
set of base files and allow one to recover from the loss of a subset of
base files. This is essentially a utility to manually (and crudely)
perform the same sort of redundancy operations as a RAID5 subsystem.
http://www.icewalkers.com/Linux/Software/52890/ras.html
However, I should warn that I haven't used this at all, much less tried
to integrate it into any sane backup/recovery scripts!
So far the best free backup tool for Linux still seems to be AMANDA
(http://www.amanda.org ) though Bacula (http://www.bacula.org ) seems to
have a similar and impressive feature set.
AMANDA still uses native dump and/or GNU tar to actually perform the
backup. It initiates those processes on each client, aggregates their
archives on a central server and manages the process of writing them out
to tapes (optionally using a tape changer).
Thus, AMANDA is tape centric and still has the inherent risks of the
underlying archiver (the vendor's dump --- dump/restore for Linux ext2 ---
or GNU tar).
I think it would be neat if AMANDA or Bacula were integrated with ras or
some redundancy library in some meaningful way.
There is an overview of these and other free backup packages for UNIX
(and Linux) at:
http://www.backupcentral.com/free-backup-software2.html
Ultimately you'd want to keep multiple generations of data backups even
if you knew that you had perfect ECC, redundancy, media, and drives. You
need this for the same reason you need backups regardless of how
sophisticated and redundant your RAID array configuration is: because you
may find that your software or your users corrupt your data, and you may
need to back off to earlier, known-good versions of the data, possibly
days, weeks, even months after those backups were made.
(Some forms of corruption can be subtle and insidious).
-------------------------------------------------------
can I have Linux on a ThinkPad G40? with WinXP?
Thu, 05 Jun 2003 18:35:32 PST
borejsza (borejsza from ucla.edu)
Hi,
I am about to buy a laptop and am looking for advice as to its
compatibility with Linux.
I know little about computers (last time I owned one it was a Commodore
64), and less about Linux, but saw a friend use it, and would like to
learn how to myself, and gradually move away from Windows. The laptop I am
thinking of buying is an IBM ThinkPad G40
(http://www-132.ibm.com/webapp/wcs/stores/servlet/ProductDisplay?productId=8600909&storeId=1&langId=-1&categoryId=2580117&dualCurrId=73&catalogId=-840).
I think it is a new model, and could not find it anywhere on the pages
that list hardware that has been already tried out with Linux.
Can anybody confirm that I can partition that laptop between Linux and
WindowsXP before I blow all my savings on it?
Thanks,
Alex
You could buy one preloaded from EmperorLinux:
(http://www.emperorlinux.com/auk.html) -- Ben
Or they'll preload a dual boot, or can customize. (So this tip is good
for more than that one model.) -- Heather
As far as I'm concerned, IBM-made hardware today should be a sure bet
for Linux anyway: they've really thrown themselves behind Linux in a big
way, and I'd be surprised to hear of a laptop they make that can't run
it. Come to think of it, given the range of hardware that Linux supports
these days, making a 'top that can't run Linux would be quite a trick in
the first place. -- Ben
[jra] Now, that's not to say that you can easily dual-boot XP. There may
be reinstallation issues, and licensing; I don't know that Partition-*
or FIPS can safely resize whatever you have loaded without breaking it,
and you may not have "install" media for XP -- only "recover" media,
which will not let you install on a resized partition.
Missing install media for WinXP isn't relevant to its ability to coexist
with Linux, but personally, if my vendor "forgot" to include the Other
OS that I had paid for - I'd demand my real discs, or that they discount
the box the price of their OS. Given the number of people competing for
your business in this venue, I have precious little tolerance for that
kind of ripoff. -- Ben
[jra] I would google for "linux win xp dual boot howto", and see what I
got. -- jra
[Kapil] Apparently, the trick is to: (1) Install Linux and resize the
NTFS partition (2) Boot the recovery CD for XP (3) Interrupt (count 5
:-)) the reinstallation process and run "OS.bat". It seems XP will then
"just install" on the resized partition.
This worked with the laptops bought for our Institute. YMMV.
-- Kapil.
-------------------------------------------------------
FTP Daemons (Servers) and Alternatives: Just Say No?
Tue, 3 Jun 2003 06:03:09 -0700
Jim Dennis (the LG Answer Guy)
Question by Dinos Kouroushaklis on the BLT-newuser list (Blt-newuser from
basiclinux.net)
Dear list members,
I would like to hear your suggestions for an ftp server.
I would like to replace an existing win2k ftp server with a Linux based
one. What I am interested in is reliability and ease of management. The
machine should need only one (maybe more) ethernet card to provide the ftp
service (except during installation time). The two ethernet cards can be
used: one for management and one for the traffic.
The machine will be an Intel Celeron 400 MHz with 160 MB (128+32) of RAM
and a 20 GB
hard disk with a public (static) IP address in the DMZ.
Regards
Just to be contrarian I have to suggest that you seriously consider
abandoning FTP entirely. HTTP is adequate for simple, lightweight
anonymous distribution of files (text or binary). scp, sftp (SSH) and
rsync over ssh are inherently more secure than plain FTP can ever be. Your
MS-Windows users can get Putty (and pscp, et al.) for free.
(Plain, standard FTP will, by dint of the standards, always pass user name
and password information "in the clear" across the Internet --- thus
exposing these valuable, private tokens to "sniffers"). For some purposes
BitTorrent can be far more efficient (for widespread, peer assisted
distribution of files to many concurrent clients, for example).
SSH, scp, and sftp:
http://www.openssh.org
Putty:
http://www.chiark.greenend.org.uk/~sgtatham/putty
rsync:
http://www.samba.org/rsync
BitTorrent:
http://bitconjurer.org/BitTorrent
If you can, just eliminate FTP and direct your users and customers to
better alternatives.
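For readers who haven't used the SSH-based tools, here are illustrative command lines for the usual FTP chores. The host names and paths are placeholders, not real servers:

```shell
# Upload a single file (the scp equivalent of an FTP "put"):
scp report.tar.gz user@server.example.com:/home/user/

# Interactive session with FTP-like get/put/ls commands:
sftp user@server.example.com

# Mirror a whole directory tree over ssh, transferring only changes:
rsync -avz -e ssh ./pub/ user@server.example.com:/var/ftp/pub/
```

All of the traffic, including the authentication exchange, is encrypted end to end.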
In general the problem with FTP servers is that they run as root (at least
during the authentication phase, if they support anything other than
anonymous FTP). So FTP daemons have classically been a source of
vulnerability (as bad as DNS -- BIND/named --- and MTA -- sendmail ---
daemons).
With that in mind, vsftpd would probably be my first free choice.
(http://vsftpd.beasts.org )
ProFTPd is popular, and has a configuration file syntax that's vaguely
similar to Apache/HTML/SGML (I'll leave it to others to judge whether
that's a
feature or bug). However, ProFTPd is complex and has had too many security
alerts posted against it for my tastes. (http://www.proftpd.org ).
WU-FTPD (for years the default that shipped with most Linux distributions)
has the worst security track record in the field. I wouldn't recommend it,
I don't care how many bugs they've patched. There comes a time to abandon
the codebase and start from scratch. There also comes a time when "brand
recognition" (the project's name) shifts from renown to notorious
infamy.
By contrast, Chris Evans coded vsftpd specifically to be as secure as
possible. He discussed the design and every pre-release of the code
extensively on the Linux security auditing mailing list (and in other fora
devoted to secure programming and coding topics).
If you're willing to go with a commercial/shareware package (that's not
free) I'd suggest that Mike Gleason's ncftpd has been around longer than
vsftpd and still has a very good track record. (http://www.ncftpd.com ).
Registration is only $200 (U.S.) per server for unlimited concurrent
connections ($100 for up to 50 concurrent users) and is free for use in
educational domains.
If there are no objections I'd like to cross-post this to the Linux
Gazette for publication (names of querents will be sanitized) since the
question comes up periodically and I like to refresh this answer and the
URLs.
All of this assumes that you have no special needs of your FTP server. If
you need special features (directory trees restricted by user/group info,
pluggable authentication support, virtual domain support, etc) then you'll
have to review these products more carefully. However, each of them offers
at least some virtual domain/server functionality and a mixture of other
features.
[Dan] For a comprehensive annotated list, see:
http://linuxmafia.com/pub/linux/security/ftp-daemons
Everybody's got their favorite, and mine's PURE-ftpd, of which Rick Moen
of Linuxmafia says on the above page:
Seems like a winner.
http://sourceforge.net/projects/pureftpd
-------------------------------------------------------
Pause after running xterm
Fri, 30 May 2003 20:39:56 -0400
Ben Okopnik (the LG Answer Gang)
Okay, so it's a nickel's worth. So there. -- Heather
Here's a little problem you might run into: you want to run a certain
program - say, as a Mozilla "Helper application" - which needs to run in
an xterm. So, you set it up like so:
xterm -e myprogram -my -options
The only problem is, when it comes time to run it, all you see is a flash
as the xterm appears, then immediately disappears. What happened? What
error did it print out? Why (this does happen at times) does it work when
you launch it 'manually' but not from Mozilla?...
Here's an easy and useful solution that will require you to hit a key in
order to exit the xterm after the program has finished running. Note that
it may fail on tricky command lines (subshell invocations, evals, and
other shell-specific gadgetry) but should work fine with normal commands
and their options.
See attached okopnik.hold.bash.txt
Invoke it like so:
xterm -e hold myprogram -my -options
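The attachment isn't reproduced here, but a minimal sketch of such a wrapper (my reconstruction; the actual okopnik.hold.bash.txt may differ) could be:

```shell
# Sketch of a "hold" wrapper: run the given command, then wait for a
# keypress so the xterm stays open. Install it somewhere on your PATH.
cat > ./hold <<'EOF'
#!/bin/sh
"$@"                                             # run the command as given
status=$?
printf 'Exit status %s -- press [Enter] to close: ' "$status"
read dummy                                       # wait for a keypress
exit "$status"
EOF
chmod +x ./hold
```

Because `"$@"` re-expands the arguments exactly as they were passed, normal options survive intact; it's only subshells, evals, and other shell gadgetry that would need quoting tricks.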
[jra] Were you actually planning to answer those questions, Prof?
Or are they left as an exercise for the students? :-)
[Ben] The answer is implicit in the solution provided, and will depend
on the specific program being launched. The implementation, as always,
is left to the student. Giddyap, dammit. :)
[JimD]
xterm -e /bin/sh -c 'myprogram -my -options; read x'
... in other words, have a shell execute your program, then read a dummy
value from the xterm (the xterm process' console/terminal/stdin)
The command will run, output will be displayed, you'll get a pause where
you can type anything you like (also allowing you to scroll through the
xterm's buffer). When you hit [Enter] the xterm goes away.
Seems pretty transparent to me. More verbose:
xterm -e /bin/sh -c 'myprogram -my -opts; echo "[Enter] when done: "; read x'
More elegant, create a two line script:
See attached jimd.pauseadter.sh.txt
(I'm not really sure we need the eval, but I don't think it'll hurt in
any case).
Now simply:
xterm -e pauseafter.sh myprogram -my -opts
(/me shudders at the electrons that got excited by this blatantly
obvious suggestion).
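The attached script isn't shown here either, but a two-liner along the lines Jim describes (my reconstruction, not necessarily the actual jimd.pauseadter.sh.txt) might be:

```shell
# Sketch of pauseafter.sh: eval the command line, then hold the xterm
# open until [Enter] is pressed.
cat > ./pauseafter.sh <<'EOF'
#!/bin/sh
eval "$@"                              # run the command line as given
echo "[Enter] when done: "; read x     # pause before the xterm closes
EOF
chmod +x ./pauseafter.sh
```

The eval means shell metacharacters in the arguments get a second round of expansion, which is sometimes what you want and sometimes a surprise; the plain `"$@"` variant avoids that.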
-------------------------------------------------------
Tips on PDF conversion
Thu, 12 Jun 2003 12:12:55 +0100 (BST)
Mike Martin (the LG Answer Gang)
Has anyone any ideas on converting PDFs to decent text?
To explain:
I have a document which has been scanned in, with the only accurate
conversion being to pdf (no images)
So I have used pdf2ps which gives me ps file.
However then when I use psto... anything text like, the output is exactly
^L
Any ideas/tips?
[Thomas] If you could convert the pdf to ps and then to LaTeX, then you
won't have a problem, since tex -> ascii is easy. However, going
from ps to ascii might require some more thought.
I know that there is a utility called "a2ps" which takes ascii and
converts it to a ps file; however, I cannot see a converse program.
I am sure that there is a perl module (hey, Ben!) that could be used to
write a perl-script for such a task, however, I am going to suggest you
try the following......(I haven't tested this):
strings ./the_ps_file.ps | col -b > ~/new_text_file.txt
I am shunting this through "col" since you describe having lots of "^L"
characters. You might have to edit the file by hand as well, since I am
sure that a lot of useless information is being processed.
[Ben] See the "pstotext" utility for that.
[Andreas] There's a utility called pdftotext; it is in the xpdf package.
See the xpdf homepage: http://www.foolabs.com/xpdf
Hopefully an OCR has been performed on your scanned document before it
was converted to pdf, otherwise the pdf file would just contain an image
and could not directly be converted to text.
Unfortunately, and very annoyingly, this is what seems to have happened.
Seriously aggravating software - it lies.
Off to see if I can work out how to convert the image to text (it's only
tables)
[Ben] Well, if it's a picture, "pstotext" won't help. Oh, and don't
bother with "strings" on a .ps file: it's all text.
[Robos] Hmm, I ran into some OCR discussion lately and found this: gocr
and claraocr (http://www.claraocr.org). The latter one seems to be more
evolved...
-------------------------------------------------------
quotas on directories?
Tue, 3 Jun 2003 19:55:26 +0200
Emmanuel Damons (emmanuel.damons from enterpriseig.com)
Answered By Thomas Adam, Jim Dennis, Kapil Hari Paranjape
Hi
Can you help me? I need to specify the size that a folder can grow to,
almost like quotas for folders and not users.
Thanks
[K.-H.] Spontaneous idea, especially if this is for one folder only:
create a partition of exactly the right size and mount it at mountpoint
"folder". If creating a partition is not possible, use a file and mount
it as a loop device.
[JimD] Along the same lines, you could use regular files with the loop
mount option to create "partitions" of this sort.
Example:
dd if=/dev/zero of=/mnt/images/$FOLDERNAME bs=1024 count=$SIZE
mkfs -t ext2 -F /mnt/images/$FOLDERNAME
mount -o loop /mnt/images/$FOLDERNAME $TARGET
Where:
FOLDERNAME is an arbitrary filename used as a "loopback image"
(the container that the loop block device driver will treat
as if it were a partition)
SIZE is the desired size in kilobytes
TARGET is the desired location of the "folder" (the mountpoint for
this filesystem).
You can use any of the Linux supported filesystem types (ext2, ext3,
minix, XFS, JFS, ReiserFS) and you can tune various options (like the
amount of reserved space on such "folders" and which UID/GID (user or
group) that space is reserved for). You should be able to use quotas,
ACLs and EAs (extended attributes and access control lists) (assuming
you've patched your kernel for ACL/EA use and enabled it) etc.
Obviously this approach has a couple of downsides. You need intervention
by root (or some sudo or SUID helpers) to create and use these images.
[Kapil] Of course, you can use User-mode-linux to create and use these
images.
[JimD] Also Linux can only support a limited number of concurrent loop
mounts (8 by default). Newer kernels allow this as a module parameter
(max_loop=<1-255> ... so up to 255 such folders maximum on the system).
This limits the number that could be in concurrent use (though an
unlimited number of these "folders" could be stored on the system,
mounted and unmounted as needed).
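As a concrete illustration of raising that limit (this assumes the loop driver is built as a module, that no loop devices are currently in use, and that you have root):

```shell
# Reload the loop module with room for 64 concurrent loop mounts
# (the default is 8; the parameter accepts values up to 255).
rmmod loop
modprobe loop max_loop=64
```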
There might be other disadvantages in performance and overhead (I'm not
sure).
[Kapil] That would be a downside with UML if you use the file systems
with UML.
[JimD] On the plus side you could have any of these encrypted, if you're
running a kernel that's had the "International crypto" patch applied to
it; and you pass the appropriate additional options to the mount
command(s). We won't address the key management issues inherent in this
approach; suffice it to say that almost forces us to make mounting these
filesystems an interactive process.
If you wanted to have a large number of these, but didn't need them all
concurrently mounted you might be able to configure autofs or amd
(automounters) to dynamically mount them up and umount them as the
target directories were accessed --- possibly by people logging in and
out.
There are probably better ways, but this seems to be the most obvious
and easiest under Linux using existing tools.
[Kapil] One solution (rather complicated I admit) is to switch over to
the Hurd which allows such things and more complicated things as well.
Another is to use "lufs" or other "Usermode filesystems". These put
hooks in the kernel VFS that allow one to set up a "user mode" program
to provide the "view" of the part of VFS that lies below a particular
directory entry.
[JimD] The very notion of limiting the size of a "directory tree"
(folder) is ambiguous and moot given the design of UNIX. Files don't
exist "under" directories in UNIX. Files are bound to inodes which are
on filesystems. Filenames are links to inodes. However every inode can
have many links (names). Thus there's an inherent ambiguity in what it
means to take up space "in a folder" (or "under a directory"). You could
traverse the directory tree adding up all files (and the sizes of all
directories) thereunder (du -s). This works fine for all inodes with a
link count of one, and for cases where all of the inodes are within the
scope of the tree (and assuming there are no mount points thereunder).
However, it's ambiguous in the general case and begs the question: just
what are you trying to accomplish?
[Kapil] Excellent explanation Jim.
-------------------------------------------------------
What is Reverse DNS?
Mon, 2 Jun 2003 20:37:46 EDT
(jimd from mars.starshine.org)
Question by TEEML914 (TEEML914 from aol.com)
I'm doing an assigment. Can you tell me in laymans terms what reverse DNS
is?
[Faber] Yes, we can.
Thank you and have a great day
[Faber] You're welcome and have a spiffy night yourself..
[JimD] Faber, I think your cheerful sarcasm might be lost on him. After
all, he's dense enough to take such a simple question (from his homework
assignment, no less) and go to all the trouble of asking us.
Yes, we can tell you. We can answer such questions. With diligent work
(as in DOING YOUR OWN HOMEWORK) you'd be able to answer questions like
that, too.
For everyone else who hears this buzz phrase and wonders about it
(people who aren't trying to skate through classes so they can make
complete idiots of themselves when they enter a job market thoroughly
unprepared by the schooling they shirked):
+--------------------------------------------------------------------+
| ............... |
| |
| "reverse DNS" is the process of asking the DNS (domain name |
| system) for the name associated with a given IP address |
| (which, of course, is numeric). Since DNS is primarily used to |
| resolve (look up) an address given a name; this numeric to |
| symbolic lookup is the converse operation. However, the term |
| "converse" is somewhat obscure so the more literate and |
| erudite among us are stuck with the phrase: "reverse DNS." |
| |
| On a technical level, a reverse DNS query is a question for a |
| PTR record in the in-addr.arpa domain. For historical reasons |
| the in-addr (inverse address) subdomain of the "Advanced |
| Research Projects Agency" (the forebear of the Internet)          |
| is reserved for this purpose. For technical reasons the four |
| components of a traditional "dotted quad decimal" representation |
| of the address are arranged in reverse order: least significant |
| octet first. This allows the most significant octets to be |
| treated as "subdomains" of the in-addr.arpa domain which allows |
| delegation (a DNS mechanism for administrative and |
| routing/distribution purposes) to be done on octet boundaries. |
| |
| Of course any good book on DNS will provide all of the gory |
| details, or one could simply read the RFCs (request for comments |
| documents) which are the normal mechanism by which standards are |
| proposed to the IETF (Internet Engineering Task Force) which |
| marshals them through a review and vetting process, publishes  |
| them and recommends their adoption. (Since the Internet is still |
| basically anarchic, the adoption of new standards is essentially |
| a ratification process --- each Internet site "votes with its |
| feet" as it were). |
| |
| In particular it looks like you'd want to read RFC3172: |
| http://www.faqs.org/rfcs/rfc3172.html |
| |
| ............... |
+--------------------------------------------------------------------+
Please have your instructor send my extra credit points c/o Linux
Gazette and be sure to have him give you a failing grade in your TCP/IP
or Internet/Networking Fundamentals class.
(In the unlikely event the assignment was to explore the use of sarcasm
by curmudgeons in the Linux community --- then bravo!)
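The reversed-octet construction described in the box is mechanical enough
to do by hand; a small sketch (the address is a documentation example):

```shell
# Turn a dotted-quad address into the in-addr.arpa name that a
# reverse (PTR) query actually asks for: least significant octet first.
ip=192.0.2.10
echo "$ip" | awk -F. '{ print $4 "." $3 "." $2 "." $1 ".in-addr.arpa" }'
# -> 10.2.0.192.in-addr.arpa
```

Tools like dig -x 192.0.2.10 or host 192.0.2.10 build this name for you
and then look up the PTR record.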
-------------------------------------------------------
Subscribe to groups...........pan,Knode.......????
Wed, 25 Jun 2003 20:21:12 +0530
Vivek Ravindranath (vivek_ravindranath from softhome.net)
Answered By Dan Wilder, Karl-Heinz Herrmann, Anita Lewis, Ben Okopnik,
Jason Creighton, Heather Stern
Hi Answer Gang,
Can you please tell me how to subscribe to Linux groups
[Dan] You might start by pointing your browser (konqueror, mozilla,
lynx, w3m, netscape, and so on) at:
http://www.tldp.org
and browse what's there. Then look at
http://www.linuxjournal.com
http://www.linuxgazette.com
http://www.lwn.com
http://www.linuxtoday.com
http://www.slashdot.com
Then you might come back and explain in somewhat more specific terms
what you're trying to do. There are lots of Linux websites, including
documentation, news, online discussions; to get to any of those, you
just click on links.
For e-mail discussion groups you mostly have to subscribe. How you do
that depends on what group you're interested in. Once you're subscribed,
any email you send to some submission address is duplicated and sent to
all subscribers.
Many discussion groups have their archives open. For example, point your
browser at
http://www.ssc.com/mailing-lists
for an overview of mailing lists hosted by SSC, publishers of Linux
Journal.
From that page you can click on list information pages and get to list
archives by following the links. The list information pages also let you
apply for membership in the lists. Normally you'll get a confirming
email back, and your list membership goes into effect when the list
management software receives your reply.
such as yahoo groups,
[Jason] Well, "Yahoo groups" are just email lists, so you can subscribe
to them and read them offline. Same deal for any mailing list.
google groups .......
[Jason] Now for newsgroups (what you call "google groups"; Google Groups
is actually a web interface on top of Usenet). For this I use leafnode
(sorry, I don't have the URL, but a Google search for "leafnode usenet
server" would probably turn up the homepage). It's an easy-to-configure
(IMHO) Usenet server that only downloads messages in the groups that you
read.
and download all messages for offline viewing using pan or knode or any
other software (please mention the name of the software and URL). I want
to view the messages offline.
First of all, I don't know whether it is possible. Can you suggest any
other methods to do so? By groups I mean any Linux group; please suggest
any good Linux groups if possible... and please give the address that is
to be entered in the address field of the viewer, and other details. I
just want to get regular information regarding Linux... thanks in advance.
Vivek.
[K.-H.] For the offline reading: I'm not quite sure what "Linux group"
you are talking about. If you want to have a look at Linux websites as
suggested, wwwoffle is very useful for caching webpages so you can view
them at leisure offline. Any new link you click on will be remembered
and fetched the next time you are online. If you mean newsgroups
(Usenet) like comp.os.linux.*, I use the [x]emacs newsreader "gnus",
which has an offline feature called "agent". You can read the info pages
on this, but if this is your first contact with news and [x]emacs then I
cannot recommend it wholeheartedly -- gnus itself is rather complex and
therefore powerful (or is it the other way round?). Agent is an
additional layer of complexity which takes time to get used to.
pan I don't know,
It's a newsreader, whose name might offend a family publication, but
which is nonetheless supposed to be very nifty. -- Heather
knode, I can only guess, is the KDE version of a newsreader; whether it
supports offline features I've no idea. There are other newsreaders (nn,
tin, ...) but as far as I know they all lack the offline feature.
Netscape has a newsreader with rather limited offline capabilities, but
for a first try that might be sufficient.
[Anita] Do you mean that you would subscribe to a mailing list on
yahoogroups and then go there and download their archives? That is
something I would like to know how to do too, because we had a list
there and changed to our own server. I'd like to be able to get those
old messages. Well, in truth, I would have liked to have had them, but
now I think they are too obsolete. Still, I wouldn't mind having them,
especially if I could get them into mbox format.
[Faber] Couldn't you use something like wget in a Perl
script to download the archives by links? Ben could probably write a
one-liner to do it. In his sleep. :-)
[Ben] Actually, it would take some tricky negotiation, Web page
downloading and parsing, etc. - it's a non-trivial task if you wanted to
do it from scratch. "Yosucker" from Freshmeat is a good example of how
to download their Web-only mail; it wouldn't be too hard to tweak for
the above purpose (it's written in Perl.)
[Jason] You could probably just use wget, with some combination of -I
and -r. The thing is an HTTP/FTP shotgun.
[Ben] Nope. Remember that you need to log in to Yahoo before you can
read the stuff; after that, you get to click on the message links (20
per page or so) to read them. If it was that easy, they wouldn't be able
to charge you for the "improved" access (which includes POP access to
your mail and a bunch of other goodies.)
[Jason] Actually, I was thinking of download from an online mailing list
archive, not logging into Yahoo.
Perhaps a little specific encoding with lynx' ability to pick up its
transmission data from stdin ... -get_data. It's your login, so you'll
need to guard your password in that packet from prying eyes. Like Ben
says, tricky, but certainly it can be done. -- Heather
-------------------------------------------------------
Confused about semantics of "mount -o,async/sync" commands
Thu, 12 Jun 2003 21:30:21 -0700
Bombardier System Consulting (bombardiersysco from qwest.net)
Answered By Karl-Heinz Herrmann, Thomas Adam, Ben Okopnik, Jim Dennis, Jay
R. Ashworth
Hello,
I am taking a local Linux certification class and seem to have offended my
instructor by questioning the semantics of the "sync" and "async" options
in the mount command. They seem backward to me and I don't understand what
I am missing.
The following are the definitions that I found online and understand for
the words:
Synchronous (pronounced SIHN-kro-nuhs, from Greek syn-, meaning "with,"
and chronos, meaning "time") is an adjective describing objects or
events that are coordinated in time. (within the context of system
activities I associate synchronous with either being timing based or
requiring an acknowledgement)
Asynchronous (pronounced ay-SIHN- kro-nuhs, from Greek asyn-, meaning "not
with," and chronos, meaning "time") is an adjective describing objects or
events that are not coordinated in time. (within the context of system
activities I associate asynchronous with being event/interrupt driven).
It has been my experience and is my understanding with disk caching that
data that is released to the system to be written to disk is kept for a
specific time or until the cache is full before being written to disk.
Hence synchronous. It is my experience and is my understanding that data
from an application which is released to the system and is directly
written through to disk is done so in an asynchronous or event driven
manner.
[K.-H.] synchronous -- the application's intent to write data and the
actual write happen at the same time
asynchronous -- the application's intent to write and the actual write
do not happen at the same time; the system decides when to write the
cached data
[Thomas] These options are really useful in /etc/exports if you ever
need to mount directories over NFS, too. Just don't specify both of them
at the same time!
[Ben] Yup. The latter is more efficient, since it allows the heavy
lifting to occur all at once (one way to look at it is that the
"startup" and the "wind-down" costs of multiple disk writes are
eliminated - you "pay" only once), but is a little less secure in the
sense of data safety - if your computer is, say, accidentally powered
down while there's data in the cache, that data evaporates, even though
you "know" that you saved it.
This is evidently opposite of the way that the terms are understood and
used in Linux. Please help me understand.
Thanks,
Jim Bombardier
Put simply, ... you're wrong.
"sync" in the Linux parlance (and in other disk buffering/caching
contexts with which I'm familiar) means that the writes to that
filesystem are "synchronized" out to the disk before the writing process
is scheduled for any more time slices. In other words, upon return from
a write() system call, the write has occurred to the hardware device.
This usage is consistent with the traditional meaning of the 'sync'
utility (part of all versions of UNIX I've used and heard of). The
'sync' utility forces the kernel to "synchronize" its buffers/caches out
to the device.
"async" means that writes are happening asynchronously to the ongoing
events in the process. In other words mere return from the function call
doesn't indicate that the data is safely flushed to the device.
Note that use of sync is strongly discouraged by kernel luminaries
(Linus Torvalds in particular). I sometimes choose to over-ride their
better judgement myself --- but I do so only with considerable mulling
on the tradeoffs. In general you're better off with UPS (uninterruptable
power supply) and a journaling filesystem than you'll ever be by trying
to force synchronous writes for an entire filesystem.
Of course, with open source packages you can opt for aggressive explicit
synchronization of selected file descriptors using the fsync() function.
Note that this can lead to poor overall system performance in some
cases. For example, MTAs (mail transport agents) and syslogd both make
extensive use of fsync(). If they share the same filesystem (/var/log
and /var/spool are on a single volume) it can make the entire system
feel sluggish under only a moderate mail handling load (as each mail
delivery logs several messages, and each of those processes runs its own
fsync() calls).
[jra] You know, the way I've always interpreted this is that it
describes the coupling between the application program's logical view of
the disk contents and the actual physical, magnetic contents of the
drive, across time:
those views are either mandated to stay in "sync" -- no buffering; if
the OS says it's written, it is on the platters, or they're "async" --
the OS is permitted to "cheat" a little bit in between when it tells
the app "it's written" and when it actually happens.
I guess it's really just another way of phrasing the same thing...
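The per-file middle ground mentioned above -- fsync() on selected
descriptors rather than sync on a whole filesystem -- can be tried from
the shell. GNU dd's conv=fsync flag (a stand-in chosen here, not something
from the thread) issues exactly one fsync() on its output file before
exiting:

```shell
# async (the default): write() returns once the data is in the page
# cache; the kernel decides when it really reaches the disk.
printf 'important bits\n' > /tmp/demo.$$

# per-file sync on demand: conv=fsync makes dd call fsync() on its
# output, so the data has hit the device by the time dd returns.
dd if=/tmp/demo.$$ of=/tmp/demo-synced.$$ conv=fsync 2>/dev/null
cat /tmp/demo-synced.$$

# or flush every dirty buffer in the system, as sync(1) always has:
sync
rm -f /tmp/demo.$$ /tmp/demo-synced.$$
```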
-------------------------------------------------------
Linux Journal Weekly News Notes - Tech Tips
Mon, 23 Jun 2003 01:29:49 -0700
Linux Journal News Notes (lj-announce from ssc.com)
Cut Them Off At The Pass
If someone's script is going haywire and making too many connections to
your system, simply do:
route add -host [hostname] reject
...to keep the offending box from shutting yours down entirely.
-------
Log A Lot Less
You can turn off syslogd's MARK lines by invoking it with -m 0. You can
put this invocation in the init script that starts syslogd. This is
especially useful on laptops to keep them from spinning up the hard drive
unnecessarily.
-------
Watch a Bit More
Using the watch command, you automatically can run the same command over
and over and see what changes. With the -d option, watch highlights the
differences. Try it with watch -d ifconfig.
-------
Rooting Around with LILO
If you are working from a rescue disk with your normal root partition
mounted as /mnt/root, you can reinstall the LILO boot sector from your
/etc/lilo.conf file with lilo -r /mnt/root. This tells lilo(8) to chroot
to the specified directory before taking action. This command is handy for
when you install a new kernel, forget to run LILO and have to boot from a
rescue disk.
-------
Removing Files Starting With Dashes
If you want to remove a file called -rf, simply type rm -- -rf. The --
tells rm that everything after -- is a filename, not an option.
The LG staff note that ./ (dot slash) preceding the offending filename
is effective too, and works even on older versions of rm - or non-Linux
systems - that may not have this handy option. -- Heather
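Both spellings are easy to rehearse in a scratch directory:

```shell
# Create and remove a file whose name looks like an option.
d=$(mktemp -d) && cd "$d"
touch -- -rf          # '--' protects touch from the odd name too
rm -- -rf             # option parsing stops at --, so -rf is a filename
touch ./-rf           # recreate it for the second spelling...
rm ./-rf              # ...which works on any rm, however old
ls | wc -l            # the directory is empty again
```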
-------
Any Program Can Learn To Read
If you have a program that reads only from a file and not from standard
input, no problem. The /proc filesystem contains a fake file to pass such
programs their standard input. Use /proc/self/fd/0.
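For instance, with wc standing in for a filename-only program (wc can of
course read stdin itself; it is just a convenient demonstration):

```shell
# /proc/self/fd/0 is a symlink to whatever the process has open as
# stdin, so handing it to a filename-only program feeds it the pipe.
printf 'alpha\nbeta\n' | wc -l /proc/self/fd/0
# -> 2 /proc/self/fd/0
```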
----------------------------------------------------------------------
This page edited and maintained by the Editors of Linux Gazette
HTML script maintained by Heather Stern of Starshine Technical Services,
http://www.starshine.org/
Copyright (c) 2003
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 92 of Linux Gazette, July 2003
----------------------------------------------------------------------
Contents:
P:: Greetings From Heather Stern
(?)Important: Apache install problem
(?)couple of questions regarding printing on Linux, pls
(?)New hard drive
(?)Redhat 7.2 upgrade to Redhat 9.1 without booting from a disk
-------------------------------------------------------
(P:)Greetings from Heather Stern
Hello everyone and welcome once more to the world of The Answer Gang. This
is a time of holiday in the United States as it celebrates its
Independence Day holiday -- nowadays mostly an excuse to go picnicking
and enjoy a lot of professional fireworks.
Let me see, in 1996 it was theoretically possible to declare your
independence from Microsoft - but really, desktop Linux still had a long way
to go. Had PGP even been invented yet? We had spreadsheets all over the
place, some work on a nice little TeX based word processor --
interoperability still needed a lot of work. On the flip side, Linux was
invisibly serving a lot of systems out there, as was FreeBSD, because
sysadmins and engineers stuck with a problem to solve and folks breathing
down their necks about it, could sneak in a small pentium and just
apologize later, knowing the bosses would be just plain unlikely to unplug
it after it had been running for a month, saving their bacon.
I'm sorry we're late with the current issue. Life's been a bit hectic (is
this any surprise?) and a few changes are going on under the hood. Not
only that but Murphy's Law seems to have it in for me...
I finally had to upgrade that 386 I've been so proud of for years - I
tried to bribe it with a new power supply, and everything. Finally I
brought it out of the server closet on a link in my open lab. Some stray
keyboard call and boom, dead as a doornail. If the keyboard controller
chip goes, there's just nothing more you can do about the motherboard;
take its memory and cpu and math-coprocessor and sell them on eBay, and
make the empty motherboard a downrange missile target. Don't worry, I'm
still your well known curmudgeonette! It's now on a 486 overdrive chip and
sounding pretty good.
I had to move a client system during that window of sanity between "the
new drop has arrived" and "can we do this over a weekend so DNS can get
over its confusion while we're not looking?" Guess when that put it - you
guessed it, deadline week. Luckily this doesn't happen too often. Even
more luckily their successful transition to new IP numbers is one of the
fastest I have ever seen. I need more clients like this one :D
I had mentioned that my Star Trek free software user group has been doing
internet lounges. At least that went well - we had a great time at this
last one, only toasted two monitors (sigh, this happens to old spare
monitors occasionally) and people are just in love with Knoppix. I can
tell you, it's not the icons, because nobody reads tooltips, or reads icon
labels. It's the not having to login, and if anything goes wrong - or they
are worried about privacy, or basically ANYTHING - they can just reboot
it. Whee! I do have plans to play with Sunil's customization tricks and
probably nail down some of the real FAQ generators. But overall, I say
live CD based distros are really nice. Mind you, we did have to have a
couple of machines that could play CDs to make everyone happy. I'd score
it as a big win for Linux though.
The spam that has been leaking through is particularly silly. Some bot
must think we're a "Gang" in the Hell's Angels sense, because now we're
getting offers to sell us motorcycle gear. Lemme see, most of us already
own leather jackets, and the ElfOS guy can't get a finer bike to ride. So
sorry, guys. Then we have the hits on homework, offering us educational
discounts... for a special you can get this week, if you answer two weeks
ago. Hope that class project was a TARDiS. D'oh!
I've tried a bit of an experiment this month; the longer Tips that you are
seeing are the shorter kind of answers that used to fill pages and pages
of The Answer Guy column. We have some of the small ones stacked up, but
these looked pretty useful. Of course, TAG is filled with the banter
you're all used to, but perhaps these are shorter threads than you're used
to seeing -- things were late enough already :) Have a lot of fun this
summer!
-------------------------------------------------------
(?)Important: Apache install problem
From Keith Richard
Answered By Faber Fedor, Ben Okopnik, Heather Stern
Respected sir,
(!) [Faber] Actually, that would be "sirs", since we are more than one, and
you'd have to include a...uh...a "sir-ess" for Heather. Or is that
"sir-ette"? Someone wanna help me out here?
(!) [Ben] I think that's Yiddish for "trouble".
(!) [Faber] Or is that "sir-ette"?
(!) [Ben] That's a heavy-duty battery company. Striking out there,
Faber...
(!) [Faber] Someone wanna help me out here?
(!) [Ben] "Ma'am". :)
(!) [Heather] We're an entire Answer Gang; "Respected Answerfolk" would
satisfy Ms. Emily Post.
Your subject needs work; telling people your stuff is "important" around
here implies that the other 200 querents' questions aren't. Or something
like that.
(!) [Ben] I'm sure the querent has read the
bad
subject FAQ
and would never simply waste electrons on gratuitous self-promotion like
that - so it should definitely be taken seriously!
(!) [Heather] Now taking on the style of the original Answer Guy, if I'm
gonna turn on the flamethrower I'm not gonna turn it back off until I've
toasted you some marshmallows to go with that scorch mark. Asbestos
clothing is available on sale at the gift shop on your way out :)
-------
(?) I was trying to install apache*.rpm in my system.
(!) [Heather] Gentle Querent,
Could you be more specific? There are several not terribly compatible
RPM based distros. In fact that could well be your problem, putting a
redhat RPM on a SuSE box, a SuSE RPM on a Mandrake box, a conversion
from (package type here) to rpm via alien ... uh, there's just too many
variables.
So it really does matter, exactly which RPM you are trying to install,
and which distro you are trying to install it onto.
Maybe looking into its control tidbits by running mc (midnight
commander) and looking into the RPM would reveal what distro it was
built for.
Frankly most of the sysadmins I know who feel Apache is very important
to their setup reach for upstream sources and build their own, so they
know for sure it's up to date and which options it has been built with.
They feel it's important to know exactly what version of software they
are depending on, that it's happy with the rest of the libraries which
already exist on exactly this server.
In fact some folk I know keep the general opinion that as soon as a
package manager gives you trouble, it's time to Use The Source, Luke.
Reach out with your feelings, know the build system around you. All
things are connected... errr, linked, after you run ./Configure, make,
(if you're paranoid) make test, make install.
(?) But there is a message like "error dependencies:
libmm.so.11 is needed to install apache*
(!) [Faber] You need to install the file libmm.so. Where do you get the
file libmm.so? I don't know, it's not on my system.
(!) [Heather] libmm is a rather basic and popular library. Probably
you'll end up updating many basic parts before long. Perhaps you have an
RPM for the right distro, but a later revision of it.
I've tried upgrading a system via RPMs a piece at a time like that.
There's a reason my fellow techies call it "dependency hell".
(!) [Faber] So let's go to google. Type in "libmm.so" and the very first
link says:
RPM resource libmm.so.1. Provided by. mm-1.1.3-9.i386, A shared memory
library.
See that "Provided by"? That tells you what program provides libmm. In
this case, it's the package mm (they should all be that obvious!).
(?) What commands I will need to complete my job?
(!) [Faber] Where do you get this package? Well, you didn't tell me
which distribution or version of Linux you're running. I would suggest
either looking on your CDs or clicking on the link from google.
(!) [Heather]
(a correct RPM for your exact distro revision)
(any supporting RPMs it also needs)
rpm -Uvh (rpmname) (rpmname2) (rpmname3)
Don't type those parentheses, mind you, just the actual filenames...
--or--
ftp to get the sources
follow the build instructions inside the Apache tarball
Don't be too surprised if you might need to make sure you have some
specific libraries present to complete your build; and the matching .h
files, often found in (some library name)-devel RPMs.
(?) Looking for your early reply.
(!) [Heather] A willingness to poke around apache.org and read up about
installation might be handy too.
You are most welcome to read past issues of the Linux Gazette online to
get familiar with general installation tricks.
Winning the TAG lotto is not guaranteed, nor is timeliness of answer,
usefulness of answer, or keeping the flamethrower lit while you're out.
We're all volunteers here. But, if that's not ok, we could point you
toward some consulting resources. Ask nicely, and some of the gang who
do consulting might give you their own rates.
-------------------------------------------------------
(?)couple of questions regarding printing on Linux, pls
From Sony Lloyd
Answered By Ben Okopnik, Heather Stern, Kapil Hari Paranjape
(?) Sorry to disturb
(!) [Ben] Do not disturb sleeping dragons, for you are crunchy and good
with ketchup. :) This is The Answer Gang - we _like_ being "disturbed"
(several of us here could be described as having that attribute
permanently set...) - at least if you pose an interesting question. So,
fear not but approach our saurine majesties, small human creature.
(Anybody seen that BBQ sauce bottle?)
(!) [Heather] No, but I think I saw a pizza heading into the hoard room.
(?) I would have couple of questions regarding printing on Linux, pls: (1)
I see "lpd" loading at my system boot up, but my kernel is not compiled
with support for "lp" (confirmed by the absence of "lp" from the output
of "cat /proc/devices").
(!) [Ben] "/proc/devices" does not necessarily reflect whether you have
parallel port support available or not; in fact, mine does not contain
an "lp" entry after startup, and yet I can happily print via my parallel
port. The thing that makes the difference (and loads the modules as
necessary) is the "kmod" option in the kernel: my "lp" module will be
automatically loaded whenever I need it. Until then, "/proc/devices"
will not have it listed.
(?) How come lpd loads without linux being set for printing + with lp
ports not detected by linux?
(!) [Ben] "lpd" is a piece of software that "catches" and outputs print
jobs; that's all it does. If it is linked in your "/etc/rc*d"
directories, it will load on startup. It does not care whether you've
set up a parallel port or not; it just does what it's been told to do,
which is start. If you have not configured the system correctly, and try
to _use_ "lpd", it will log an error message - just as it should.
One of the major differences between Wind0ws and Unix is that, rather
than large monolithic programs that do many things at once (the Wind0ws
way), Unix uses "small sharp tools" - programs that do one thing well
and can be easily connected to other programs. In this philosophy, a
printing daemon does not (and should not) check for the presence or
absence of ports on startup - that's not its job. What if, for example,
I decide not to load the "lp" module until I want to print? Should "lpd"
fail to load?
(?) (2)I have an old linux 2.0.30 with no xwindow.
(!) [Ben] The 2.0 kernels are perfectly capable of supporting XFree86.
In fact, that's a bit of a misnomer: X only has a tangential
relationship with the kernel - there's nothing in the kernel (as far as
I know, anyway) that is required specifically for running it.
(!) [Heather] Yes, there is; the feature is called "sockets". But, if
you disable it there are a lot of things that won't work.
Depending on how much your monitor hates you, you might need
framebuffers, and that's a newer kernel feature.
(!) [Kapil] Actually, inter-process communication was required for X to
first be ported to Linux. In other words, Linux pre-IPC (in the old
version 0.12 days, possibly up to version 0.98?) could not run X.
(!) [Ben] True, but then you may as well say that if you had no kernel,
you couldn't run X either... 2.0.x is way after that. In practice, if
you have a $GENERIC_KERNEL, X will work just fine.
(?) Simply want to set it to point to a Canon BJC80 printer via the
parallel port. I did recompile the kernel for "lp" support, but do not
know what to do next.
(!) [Ben] Well, the thing to do _first_ would have been reading the
Printing-HOWTO (and perhaps the Printing-Usage-HOWTO) at
http://www.tldp.org. It's always a good idea to look for a HOWTO
when you are experimenting with a new and unfamiliar Linux subsystem.
(?) For now, the lp port is not detected yet by my Linux system.
(!) [Ben] You don't really know that, actually. One way you can tell -
it's a little subtle, actually - is by using the "file" utility and its
"-s" (special file) option. The messages will vary a bit depending on
whether the printing daemon is loaded or not.
# Starting with printing daemon loaded
ben@Fenrir:~$ file -s /dev/lp0
/dev/lp0: can't read `/dev/lp0' (Device or resource busy).
ben@Fenrir:~$ file -s /dev/lp1
/dev/lp1: can't read `/dev/lp1' (No such device or address).
See the difference? Here it is with the daemon stopped:
ben@Fenrir:~$ su -c '/etc/init.d/cupsys stop'
Password:
Stopping CUPSys: cupsd.
ben@Fenrir:~$ file -s /dev/lp0
/dev/lp0: file: read failed (Input/output error).
ben@Fenrir:~$ file -s /dev/lp1
/dev/lp1: can't read `/dev/lp1' (No such device or address).
Different message, but still understandable.
(?) So how to manually (step by step, with no x window utility -- simply
from command line) configure my printer.
(!) [Ben] Read the above HOWTOs; they'll walk you through the process.
Feel free to let us know if you run into any problems you can't solve.
-------------------------------------------------------
(?)New hard drive
From Rodrigues, Joseph
Answered By Thomas Adam, Ben Okopnik, John Karns, Anita Lewis
Hello answer gang,
I have recently installed an additional IDE hard drive on my system. I
previously had 1 IDE HD and one CDRW, each on its own controller. After
adding the new HD, I changed the old HD and CDRW to one controller, HD
master, CDRW slave, and put the new HD on the other controller.
(!) [Thomas] Right...
(?) The Bios sees the new drive, Linux sees the new drive as /dev/hdc, but
looking at the output of the dmesg command I see that it can not find a
driver for it, thus I can't use fdisk (that's what led me to look at dmesg
in the first place) to partition it and create the file systems that I
want.
(!) [John] Hmm, this seems odd. It would seem that if the BIOS sees it,
then the Linux kernel should too.
(!) [Thomas] OK -- I think I see what you're saying, yet your
terminology is completely out :) Linux does not use the concept of
"drivers" -- there are no drivers to run hardware from. The "/dev/"
directory lists all the devices you'll need, and it is the kernel that
interfaces with them.
(!) [Ben] My question to Joseph would be, what exactly are you seeing in
the output of "dmesg" that is causing you to draw this conclusion? The
conclusion is incorrect, but the message is still important.
(?) To answer your question first. This is what I get from the kernel
hdc: bad special flag:0x03
hdc: driver not present
(!) [Thomas] Also, fdisk is used to inspect and partition your drive --
NOT to format it (as with the DOS equivalent). There is also a
program called "cfdisk" which is quite good, but still very much
experimental and not something that I would recommend you use. So, we'll
stick to the CLI :)
(?) I know how to partition and create the file system; unfortunately I
don't know how to install the driver.
(!) [Thomas] The fact that you knew how to partition your drive is
irrelevant in this case, as I did not know that (my powers of Telepathy
aren't all that great :) ). Furthermore, I always like to try and expand
on answers so that it makes for good General Readership (tm), rather
than answering you specifically.
(!) [Anita] /dev/hdc is likely the right device, since you said you have
it on the 2nd IDE and I assume as master.
First step is to partition it. Does 'fdisk /dev/hdc' do anything? Does
the drive get found? Hopefully it does and you can then use that program
to partition the drive making at least 1 partition. You don't have to
partition it all at this time if you only want to use part of it, but
remember that if you want more than 4 partitions, you will have to make
one of them extended. With fdisk, just do 'm' like it says in order to
see the commands.
After you have a partition, you can run mkfs on it --
(!) [Thomas] To create a filesystem (ext2), issue the following
command.... (as root):
mke2fs -c /dev/hdc1
(I have used the "-c" switch here to scan the drive for badblocks -- since
this is a new drive, this is a good idea).
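The mke2fs step can likewise be tried on a file-backed image first. A minimal sketch: `fs.img` is a scratch file standing in for `/dev/hdc1`, and `-F` forces mke2fs to run on a regular file; the `-c` badblocks scan is omitted here only because it is slow, and should be kept for the real (new) drive as suggested above.

```shell
# Make a small scratch image and put an ext2 filesystem on it.
dd if=/dev/zero of=fs.img bs=1M count=16 2>/dev/null
mke2fs -F -q fs.img

# dumpe2fs -h prints the superblock summary; the ext2 magic number
# confirms the filesystem was created.
dumpe2fs -h fs.img 2>/dev/null | grep -i 'magic'
```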
(?) I looked in the /dev directory and I do see a /hdc device, so I really
don't know how to proceed. I checked some of the howtos, but they all
assume that the driver is installed and you can access the drive.
(!) [Thomas] Yes, /dev/hdc is listed, and yes, the kernel does detect
it.
(!) [John] /dev has pre-existing entries for all commonly used devices.
The distro (SuSE 7.1) on this laptop has hda through hdl. That doesn't
indicate that I have 12 ide devices connected.
(!) [Ben] Erm... well, we don't actually know what the kernel detects.
He may have a bad drive, some totally weirdo IDE controller that Linux
won't recognize (hey, anything is possible), ...
(!) [John] A couple of weeks ago I had a similar experience with a new
Western Digital drive. It seems that for want of a few more pennies,
they are now making hd's which are not equipped with a full IDE
controller. The one I was dealing with required that it be connected
with the jumper on the drive set to "cs" or cable select. When set to
master (why on earth they would bother to put a "master" jumper position
on a drive that didn't support it is beyond me) I could do nothing with
the drive. (Although I seem to remember that the BIOS couldn't see it
either, which seems to differ from what the querent is seeing.) It was
recognized fine when set to cable select, but that was it. I thought the
drive had died on me and brought it back for an exchange.
Perhaps it was defective, I don't know. But I resisted the retailer's
argument that I just continue using it with the limitation of using it
as "cs", and insisted on exchanging it for different device. It was also
a different brand - Maxtor. The WD had a very "cheap" feel to it, much
lighter than any hd I can ever remember handling, which also made me
somewhat suspicious.
(!) [Ben] ...or a piece of buttered toast plugged into the slot. :)
(!) [John] I believe this may be the case here.
(!) [Ben] Oh - so it is a piece of buttered toast? Right on! I didn't
think my ESP was working that well, but if you insist... :)
(!) [Thomas] With a bit of ginger marmalade, yum.
(!) [Heather] Alrighty then :D Can we get a photo for "Really Weird
Things That Can Manage To Run Linux If You Really Try" ? If not, I will
have to see if we can get a picture of that for a future HelpDex!
Buttered toast as a drive must really cook.
(!) [Thomas] :-) Indeed, but sometimes, Ben, unless we are told
otherwise, certain "stock" assumptions have to be made :)
(!) [Thomas] The reason why you cannot access it is because it has not
been formatted yet in a manner that the kernel can understand.
(!) [Ben] This, however, is highly probable.
(!) [John] He should be able to run fdisk on an unformatted disk. My
guess is that the kernel makes an inquiry to the drive controller, and
the platter contents should be completely irrelevant. One possible
exception would be a drive having some surface damage in a critical area
such as sector zero - which might cause a problem for the controller ...
or the controller being defective.
(!) [Ben] I meant "access" as in "read/write files, etc."; I'm pretty
certain Thomas did as well.
(!) [Thomas] Yes, I did, Ben. The querent already stated it was seen in
"dmesg" output. I was more concerned with ensuring that the drive could
read/write files, etc.
(!) [Ben] "fdisk" does indeed deal specifically with the IDE control
mechanism rather than the platter contents (other than track 0); it
shouldn't care about the contents at all, although some broken DOS
versions (I'm thinking specifically of OnTrack, lo these many years ago)
could be made to hang with a maliciously crafted MBR - there was a
mutated version of the "Jerusalem" virus that was plain murder on
Compaqs. I met several "techies" who mistakenly threw away perfectly
good HDs because of it.
(!) [Thomas] SeaGate drives were also notorious for falling on their
backs with their legs twitching after about a year or so.....
(back to the querent) What exactly do you have planned for this new
drive, once it has been formatted? I strongly suggest (no -- I am
TELLING you) :) that you read the following...:
http://linuxgazette.com/issue83/okopnik.html
(!) [Ben] Thanks, Thomas. My own advertising service, how cool!
(!) [Thomas] Now -- that said, and you have your drive formatted, you'll
now want an entry for it in /etc/fstab so that it can be mounted, so....
mkdir /some_new_mount_point
(change the above as necessary -- that'll ensure a mount point for the
new drive. Some people like to have their devices mounted under "/mnt" -
it is up to you).
Now -- up until now, you haven't said exactly which filesystem you'll
be using. I stuck with ext2 as it is the de facto standard for kernels < 2.4.xx.
If you're running a kernel version >=2.4.17 and it has ext3 support
compiled in (it ought to) -- then you can use ext3. To do that though,
you'll need to run....
tune2fs -j /dev/hdc1
to create the journal. If you know you're not using ext3 then skip that.
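The ext2-to-ext3 conversion described above can also be rehearsed on a file-backed image before touching the real partition. A minimal sketch, assuming e2fsprogs is installed; `journal.img` is a scratch file standing in for `/dev/hdc1`.

```shell
# Build a small ext2 filesystem on a scratch image (-F allows a
# regular file, -q suppresses the usual chatter).
dd if=/dev/zero of=journal.img bs=1M count=16 2>/dev/null
mke2fs -F -q journal.img

# tune2fs -j adds a journal in place, without disturbing the data.
tune2fs -j journal.img >/dev/null

# "has_journal" in the feature list is what lets the kernel mount the
# filesystem as ext3 instead of ext2.
tune2fs -l journal.img | grep 'Filesystem features'
```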
So...now edit /etc/fstab, and add an entry similar to this....
/dev/hdc1 /mp ext2 defaults 1 1
You'll have to change the above as necessary (and make sure that you
change ext2 -> ext3 or vice versa). Then when that is done, save the
file.
now issue the command....
mount -a
(!) [Ben] Since "defaults" in the above includes the "auto" option, this
partition will be mounted automatically the next time you boot. However,
the last two numbers which you show as "1 1" take a little more than
just a blind copy-and-paste. From the "fstab" man page:
The fifth field, (fs_freq), is used for these filesystems by the
dump(8) command to determine which filesystems need to be dumped. If the
fifth field is not present, a value of zero is returned and dump will
assume that the filesystem does not need to be dumped.
The sixth field, (fs_passno), is used by the fsck(8) program to
determine the order in which filesystem checks are done at reboot time.
The root filesystem should be specified with a fs_passno of 1, and other
filesystems should have a fs_passno of 2. Filesystems within a drive
will be checked sequentially, but filesystems on different drives will
be checked at the same time to utilize parallelism available in the
hardware. If the sixth field is not present or zero, a value of zero is
returned and fsck will assume that the filesystem does not need to be
checked.
So, "fs_passno" will depend on exactly what this partition is. Not a
huge thing, but it should be done right.
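Following the man-page excerpt, a hypothetical fstab entry for this kind of non-root data partition would set fs_passno to 2 rather than 1 (the mount point `/data` is a made-up example):

```
# /etc/fstab -- example entry for a new data partition:
# fs_freq 1 so dump(8) includes it; fs_passno 2 so fsck checks it
# after the root filesystem.
/dev/hdc1   /data   ext2   defaults   1 2
```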
(!) [Thomas] and that'll mount your new drive. If you "cd" to the
mount-point, you'll find a "lost+found" directory there, which is used
during fsck runs for any lost inode data that can be found.
(?) System information:
Suse 8.1
HDs both are WD just different models
(?) Any help would be appreciated.
-------
(?) Here is what I did. I booted from the CDR and went into rescue mode.
From there I had no problem accessing hdc and using fdisk to partition the
disk as I wanted.
I installed linux on it, and copied my current home partition from
/dev/hda to /dev/hdc. (yes I could have copied all the file systems, but I
am not proficient enough to work out all the details, this way it took me
less time to do it, and less aggravation. I may still want to do this as
an exercise later).
I went back to booting from /dev/hda and was still having the same problem
with hdc when booting from hda. I just got a response from someone which I
think may have hit the nail on the head, and I quote: "Just a thought: do
you have a line such as "hdc=ide-scsi" somewhere in your LILO (or GRUB or
whatever) configuration? Trying to treat the hard drive as an ATAPI device
might cause the problem you're seeing." sent by John-Paul Stewart.
As a matter of fact I do. As soon as I get home today, I will check the
parameters from my boot setup for hdc against hda and correct the hda
parameters.
I am hopeful that this may be the cause of the problem.
Thanks for all your help.
Joe.
-------------------------------------------------------
(?) Redhat 7.2 upgrade to Redhat 9.1 without booting from a disk
From Nick Pringle
Answered By Thomas Adam, Faber Fedor, Dan Wilder, Heather Stern, Ben
Okopnik
(?) Hi
I rent a Redhat 7.2 system installed on a host machine 'in a galaxy far,
far away'. I want to upgrade to Redhat 9.1 but cannot follow the prescribed
route because I cannot boot from floppy or CD the way Redhat says to do the
upgrade. I can, however, always boot from an emergency ram disk and then
mount the real system to work on it. When booted via this emergency RAM
disk I have full net access and have ftp access to all the Redhat CDs etc.
Is there any way of running the upgrade procedure 'manually'?
Regards Nick Pringle
(!) [Thomas] What do you mean by installing it "manually"? The RH
installer allows you the choice of doing either http/ftp/cdrom install,
depending on what you choose.
Are you trying to say, then, that you want to upgrade only certain
packages on your system? (N.B. This is not a good idea: since this is a
higher version than the one currently on your system, upgrading
individual packages leads to "dependency hell".) Cf:
http://www.linuxgazette.com/issue71/tag/3.html
Could you try and provide more details. Thanks.
(!) [Faber] That's the problem he has, Thomas. The box is "far far away"
and he can't just "put in the CD, boot the machine, and choose
http/ftp/cdrom" after selecting his language mouse and keyboard. He
isn't at the machine.
So he ants to know if he can manually start the installation process,
i.e. not reboot the machine.
It's a good question and I haven't found a solution yet.
(!) [Dan] Maybe he shoulda used Debian.
I routinely upgrade Debian systems one major release level via an ssh
login. So far not "far far away", but without touching the box being
upgraded, yes. At one point I upgraded a running web server this way,
with only a fifteen minute interruption to its services.
Which proves such a thing can be done. Now whether other distributions
allow for it ...
(!) [Heather] I am a lot more careful about letting debian do its
automagical thing if I know I can't get over to that machine and whack
it one. There have been a few times in my life, when playing with
Debian's idea of the leading edge, I took too careless a leap and added
that "b" noise to the word. Ouchie.
Essentially, I use a curses-mode selector such as aptitude. I update,
and I pick some very basic stuff to make sure the raw parts are
definitely grabbed first. This generally means dpkg, debconf and its
related parts - libc and things having to do with login, such as the
shell, pam, and so on. All in all I've usually done 4 or 5 small sets of
critical utilities (not always members of "base" - sometimes in admin,
or related to the actual purpose of the system). Before anything whose
improper behavior would give me the willies, I use dpkg-repack to save
an instance of its current bits before I allow it to upgrade. Yeah, I
bail out of the selector a lot. But when I finally am happy with how
perl settled in, I won't need to worry about the rest of it.
(!) [Ben] 1) Install the system on a local machine; configure and tweak.
2) Copy everything across (FTP or whatever) to a new partition on the
remote machine.
3) Carefully adjust the remote "lilo.conf" to boot the new
"installation" on the next reboot.
Anybody see a problem with this scenario? Sure, some stuff is going to
require tweaking afterwards - but that's true of any new install.
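Ben's three steps can be rehearsed locally before trying them on a rented box. This is a minimal sketch only: two scratch directories stand in for the freshly built system and the new partition on the remote machine, and the hostname and `/dev/hda3` are made-up examples, not details from the thread.

```shell
# Scratch layout: "newsys" is the locally built system (step 1),
# "remotepart" stands in for the new partition on the far machine.
work=$(mktemp -d)
mkdir -p "$work/newsys/etc" "$work/remotepart"
echo farfaraway > "$work/newsys/etc/hostname"

# Step 2: copy everything across, preserving permissions, symlinks and
# timestamps. Against the real host this would go over FTP or ssh to
# something like root@remote:/mnt/newpart/ instead.
cp -a "$work/newsys/." "$work/remotepart/"

# Step 3: append a lilo.conf stanza that boots the new installation on
# the next reboot; on the real machine you would then run lilo.
cat >> "$work/remotepart/etc/lilo.conf" <<'EOF'
image=/boot/vmlinuz
    label=newsys
    root=/dev/hda3
    read-only
EOF
```

The one irreversible moment is the reboot into the new root: if lilo.conf is wrong, only the old installation (still intact on its own partition) can save you, so keeping the old stanza as a fallback entry is prudent.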
(!) [Dan] Sure sounds a lot like the safest upgrade procedure for what's
still, for quirky reasons, my favorite distribution ... Slackware.
It'd be nice if the ever-so-much featureful RH could do better.
-------
(?) Hi. Thanks for the prompt reply.
(!) [Thomas] All part of the service, sir!
(?) I recently upgraded the machine in my office from Redhat Linux 7.2 to
Redhat 9.1. No trouble. I just booted from the CD, click the options in
graphics mode and did the upgrade. :-) So I've been through the process on
a PC I can touch.
(!) [Thomas] Yep, installers are becoming easier and easier. I am sure
people like Jim Dennis and Ben Okopnik (resident on this list) will
remember the days of black and white, and having to use the "friendly"
program fdisk :) :)
(?) But I am trying out a hosting package provided by a company called
Oneandone. It's a very attractive solution because they have fast access,
I won't have to use a machine and UPS of my own and they are cheap! It's
$50 a month. It's on www.oneandone.co.uk as a Root Server 1. I live in
Britain,
(!) [Thomas] Well, well, well. I live in England too :) Small world, eh?
I have heard of oneandone, but never really looked any further, until
now :)
(?) I think the machine might be in Germany but I'm not actually sure. I
don't know what sort of hardware they run but I am simply unable to detect
if it is virtual in any way. Even the hardware reporting at boot time says
it is a real machine. I imagine they have racks of tiny machines with only
processor, memory, Realtek netcard and a hard disk.
(!) [Thomas] A reasonable assumption.
(?) I get Redhat Linux 7.2 installed but very limited support! When I
point out to them that Redhat 7.2 becomes obsolete in November they agree
it will but cannot upgrade my Server package. To use the service I really
need to know I can upgrade at some time.
(!) Hmm, I am going to be picky here and say that NO Linux distribution
becomes obsolete. Yes, some of the programs will be at a lower version
number than some more recent ones, but as long as it works and does what
you want it to do -- there is no reason to upgrade at all. That is
perhaps the selling point of Linux for me over Microsoft --- you aren't
forced to upgrade. If it works, keep it. Heck, I know some
people who are still running kernel 1.x.x :)
(!) [Heather] I'll have to agree; I've safely let systems lie with only
the important service ever being updated, behind a nice little firewall
whose kernel is updated more often.
(?) I am truly remote. I ONLY have SSH access. I cannot boot from anything
other than the hard disk of the remote machine. When I upgraded my local
machine in the office I booted with the Redhat CD1 on my local CD drive.
As far as I can see Redhat upgrade requires you to BOOT from either a
floppy or CD. If I could boot from the CD I know I could choose ftp/cd/or
local hard disk but I cannot do the very first step.
(!) [Thomas] I see your problem :) If you have SSH support, then what I
would be inclined to do is try and run a program called "up2date", like
so:
up2date -u
Essentially, this locates a RH server and updates old packages on your
current remote system with newer ones. It does not, though, perform a
dist-upgrade. I suppose that you could look at "up2date" as a
very childlike form of Debian's "apt-get".
So, this is a half-way solution to your problem.
Another, perhaps more direct approach is to use the utility "wget" to
download the ".iso" files and mount them on a loopback device, so that
you can then issue:
rpm -ivh *.rpm
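Thomas's wget-and-loopback route can be sketched as a dry run: `run=echo` prints each command instead of executing it, since downloading an ISO, loop-mounting it, and running rpm all need a real server and root access. The URL and paths below are illustrative only, not actual Red Hat locations; on the real machine you would set `run=` (empty) to execute the steps.

```shell
# Dry-run switch: every command below is echoed, not executed.
run=echo

# Fetch the install ISO (hypothetical URL), mount it on a loopback,
# and install from the RPMS directory inside it.
$run wget http://ftp.example.com/pub/redhat/9/iso/disc1.iso
$run mkdir -p /mnt/iso
$run mount -o loop disc1.iso /mnt/iso
$run rpm -ivh /mnt/iso/RedHat/RPMS/*.rpm
$run umount /mnt/iso
```

For an upgrade of an existing system, `rpm -Uvh` (upgrade) or `rpm -Fvh` (freshen) is generally safer than `-ivh`, which installs new packages alongside the old versions.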
(?) Sorry to drag it on a bit but I hope the above clarifies the
situation.
If you haven't lost the will to live by now thank you very much for
listening.
Incidentally I agree that partial upgrades and going through each of the
RPMs one at a time will result in "dependency hell" which is why I need a
3rd route.
(!) [Heather] There you have it folks; if anyone has had their own
successes in such distant climes, maybe you'd like to write us an
article someday soon?
----------------------------------------------------------------------
Copyright (c) 2003
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 92 of Linux Gazette, July 2003
HTML script maintained by Heather Stern of Starshine Technical Services,
http://www.starshine.org/
----------------------------------------------------------------------
+------------------------------------------------------------------------+
| LINUX GAZETTE | News Bytes |
| ...making Linux just a little more fun! | By Michael Conry |
+------------------------------------------------------------------------+
Contents:
* Legislation and More Legislation
* Linux Links
* Conferences and Events
* News in General
* Distro News
* Software and Product News
Selected and formatted by Michael Conry
Submitters, send your News Bytes items in PLAIN TEXT format. Other formats
may be rejected without reading. You have been warned! A one- or
two-paragraph summary plus URL gets you a better announcement than an
entire press release. Submit items to gazette@ssc.com
----------------------------------------------------------------------
July 2003 Linux Journal
[issue 111 cover image] The July issue of Linux Journal is on newsstands
now. This issue focuses on Hardware. Click here to view the table of
contents, or here to subscribe.
All articles older than three months are available for public reading at
http://www.linuxjournal.com/magazine.php. Recent articles are available
on-line for subscribers only at http://interactive.linuxjournal.com/.
----------------------------------------------------------------------
Legislation and More Legislation
----------------------------------------------------------------------
Patents
It looks like the flawed system of software patenting that has become
entrenched in the United States is on its way to Europe, amidst clarion
calls of "harmonise!" and "encourage innovation!". Not everybody is so
keen. Richard Stallman and Nick Hill have written a brief but thorough
critique of the plans, published in The Guardian. Ultimately, each side
claims that the introduction of software patents will have opposite
effects. Will they encourage innovation or stifle it? Will software
development thrive in a more certain environment, or become bogged down in
a morass of litigation? In the end, you have to look at the evidence and
make up your own mind. It is this columnist's opinion, however, that if
you look at people like Richard Stallman, then look at the people
supporting increased patents, and ask "who seems to support genuine
innovation?", you will get some way towards the answer.
Arlene McCarthy, a British MEP who has played an important role in the
development of software patenting plans, certainly knows where she stands.
She also knows what those of us who advocate free software should do...
It is time some of the "computer rights campaigners" got real... We have
an obligation to legislate not just for one section of the software
industry who seeks to impose its business model on the rest of industry,
which moreover is not "free", but is actually a different form of
monopoly by imposing a copyright licence system on users.
To be honest, this smacks of the "TINA" doctrine (There Is No Alternative)
promulgated by Margaret Thatcher & Co. during the 1980s. McCarthy does
not even appear to appreciate the irony that the pro-patent lobby seeks to
impose a business model on the rest of industry: a business model based on
government-backed artificial monopolies. The Register has criticised those
that rail against people like Arlene McCarthy as being ineffective, and
ultimately self indulgent. Although the criticisms have some validity,
they are ultimately cheap and convenient rather than insightful, and are
not necessarily a true measure of the reality of opposition. Perhaps a
truer indication of the reality being faced in Europe, and maybe
especially in the UK, is the British Government's handling of the public
consultation with regard to ID cards. In an effort to maintain a result
which could be used to provide positive spin and reduce debate, thousands
of submissions made by members of the public via the STAND.org.uk website
have been amalgamated into a single vote. Clearly the UK Government is not
keen to have the terms of engagement defined by the public, no matter how
flattering we are. The interests that are defining the terms of engagement
are perhaps illustrated by proposals to include biometric data on European
passports.
Nonetheless, there is still work that can be done. Even though The
European Parliament's Committee for Legal Affairs and the Internal Market
(JURI) has voted on a final list of proposed amendments to the planned
software patent directive, the proposals still have to pass the European
Parliament. Following attempts to rush the directive through the
Parliament stage it has been rescheduled to its original date, September
1st 2003. This allows some time for concerned parties to lobby their MEPs,
though with upcoming holidays, there is not as much time as one might
think.
----------------------------------------------------------------------
EULA
Infoworld reports that the US Supreme Court has refused to hear a
reverse-engineering case, thus allowing a lower court ruling to stand. The
lower court ruling was against a company that had imitated a product's
look and feel (as opposed to recreating similar code) in violation of the
product's EULA. The case is significant because it's outside the UCITA
states (Virginia and Maryland, which expressly make EULAs enforceable),
where EULAs are of questionable legal value. But now more valuable,
apparently. The court also apparently accepted the plaintiff's contention
that the defendant "must have" examined more than just the user interface,
with no direct evidence. This case goes back several years, with previous
suits between the companies.
(Analysis by Mike 'Iron' Orr)
----------------------------------------------------------------------
SCO
There is little point in going through the details of the SCO case once
again. Instead, you can peruse the sco.iwethey.org collection of documents
relating to the lawsuit. If you want further reading, Eric Raymond has
released an updated version of his SCO vs. IBM position paper which
reflects some of the changes in the case over the past weeks. Hopefully
the doubt surrounding this whole affair will be dispelled soon. As Richard
Stallman has commented, the media bears some blame for the depth of the
FUD generated by this case.
----------------------------------------------------------------------
Linux Links
The e-zine LinuxFocus has the following articles for July/August:
* Going 3D with Blender: Very first steps
* A GNUstep "small apps" tour
* Product Review: Textmaker
* IDS - Intrusion Detection System, Part II
* Book Review: Mastering Red Hat Linux 9
* GUI Programming with GTK - part 2
* A 1 Bit Data Scope
* Building an autonomous light finder robot
Some links of interest from the O'Reilly stable:
* Introduction to Netcat "the Swiss Army Knife of networking".
* Eight Questions for George Dyson, Director's Visitor of the Institute
for Advanced Study and a historian.
* Video Playback and Encoding with MPlayer and MEncode.
* Snort Security Holes and Strategies for Safe Network Monitoring.
* Almost 2,000 ephemeral films (industrial, educational, and
advertising) from the early 1900s through the 1960s are available for
free on the Net, thanks to film archivist Rick Prelinger.
* Running Arbitrary Scripts Under CVS.
* Python Success Stories: eight true tales of flexibility, speed, and
improved productivity.
Some interesting links from NewsForge:
* Automatic Astronomy: how computers help spot hard-to-see phenomena.
* SCO staff join Linux protests.
* Using Slackware as a Live CD.
* Inside the Linux kernel debugger.
* The Brazilian Public Sector to Choose Free Software: it has been
reported that the Brazilian public sector plans to migrate from
Windows to Linux on 80% of computers in state institutions and
state-owned businesses.
Some interesting links from Linux Today:
* US Department of Defense rates Open Source [pdf].
* Wired reports that the developing world can benefit from GNU/Linux.
* Forbes magazine on the limitations of Linux.
* Welsh speaking computer users have created their own Linux
distribution.
* Building a DIY TiVo
SSC, publisher of Linux Journal, recently announced the launch of a new
on-line publication, WorldWatch. It offers readers a comprehensive daily
digest of articles from publications around the world about topics
concerning Linux and open-source software.
Modern SCO Executive, apologies to Gilbert and Sullivan. Everybody join in
for the chorus.
Slashdot discussion on the release of Linux 2.4.21
Some Linux Journal links:
* Linux Journal has reported on a Finnish study on FLOSS (free/libre and
open-source software) in developing countries.
* Working with OpenSSH.
* VGA for the Ultimate Linux Box.
An interesting Linux Weekly News look at open-source content management
systems. Many of the talkbacks have good information too.
Mike Crawford has written a fine selection of articles on the general
topic of quality in Free Software. Titles include Why We Should All Test
the New Linux Kernel, Using Test Suites to Validate the Linux Kernel, and
more.
----------------------------------------------------------------------
Upcoming conferences and events
Listings courtesy Linux Journal. See LJ's Events page for the latest
goings-on.
------------------------------------------------------------------------
O'Reilly Open Source July 7-11, 2003
Convention Portland, OR
http://conferences.oreilly.com/
------------------------------------------------------------------------
12th USENIX Security August 4-8, 2003
Symposium Washington, DC
http://www.usenix.org/events/
------------------------------------------------------------------------
HP World August 11-15, 2003
Atlanta, GA
http://www.hpworld.com
------------------------------------------------------------------------
Linux Clusters Institute August 18-22, 2003
Workshops Yorktown Heights, NY
http://www.linuxclustersinstitute.org
------------------------------------------------------------------------
LinuxWorld UK September 3-4, 2003
Birmingham, United Kingdom
http://www.linuxworld2003.co.uk
------------------------------------------------------------------------
Linux Lunacy September 13-20, 2003
Brought to you by Linux Journal Alaska's Inside Passage
and Geek Cruises! http://www.geekcruises.com/home/ll3_home.html
------------------------------------------------------------------------
Software Development September 15-19, 2003
Conference & Expo Boston, MA
http://www.sdexpo.com
------------------------------------------------------------------------
PC Expo September 16-18, 2003
New York, NY
http://www.techxny.com/pcexpo_techxny.cfm
------------------------------------------------------------------------
COMDEX Canada September 16-18, 2003
Toronto, Ontario
http://www.comdex.com/canada/
------------------------------------------------------------------------
IDUG 2003 - Europe October 7-10, 2003
Nice, France
http://www.idug.org
------------------------------------------------------------------------
Linux Clusters Institute October 13-18, 2003
Workshops Montpellier, France
http://www.linuxclustersinstitute.org
------------------------------------------------------------------------
LISA (17th USENIX Systems October 26-30, 2003
Administration San Diego, CA
Conference) http://www.usenix.org/events/lisa03/
------------------------------------------------------------------------
HiverCon 2003 November 6-7, 2003
Dublin, Ireland
http://www.hivercon.com/
------------------------------------------------------------------------
COMDEX Fall November 17-21, 2003
Las Vegas, NV
http://www.comdex.com/fall2003/
------------------------------------------------------------------------
Linux Clusters Institute December 8-12, 2003
Workshops Albuquerque, NM
http://www.linuxclustersinstitute.org
------------------------------------------------------------------------
----------------------------------------------------------------------
News in General
----------------------------------------------------------------------
SGI Announces first Altix Customers on Madison
SGI has announced the first of its customers receiving the new Intel
Itanium 2 'Madison' processor in recent sales of the SGI Altix 3000
system. The Altix system combines SGI's fourth generation NUMAflex shared
memory architecture with Intel Itanium 2 processors and the 64-bit Linux
operating system for a uniquely balanced system. Each supercluster node
runs a single Linux operating system image with up to 64 Itanium 2
processors and 512GB of memory. The new processor is immediately
available on Altix systems. Among the first SGI customers to deploy Altix
3000 systems based on the new processors are:
* SARA Computing and Networking Services: 416 Intel Itanium 2 processors
(1.30 GHz, 3M) and 832GB of memory.
* Oak Ridge National Laboratory: SGI Altix 3000 installation running 256
Intel Itanium 2 processors (1.50 GHz with 6MB L3 cache) with 2TB of
system memory and 1.5 TFLOPS of computational power.
* Pacific Northwest National Laboratory: SGI Altix 3000 system powered
by 128 Intel Itanium 2 processors (1.50 GHz, 6MB).
SGI has been doing very well in terms of performance benchmarks with
systems based on the new Itanium 2 processor. The entry-level server
starts at $70,176 (U.S. list) at four processors with up to 32GB of memory
and scales to 12 processors and 96GB of memory.
----------------------------------------------------------------------
Distro News
----------------------------------------------------------------------
Debian
Robert Millan announced that he has managed to get GNU/FreeBSD installed
self-hosting. The kernel runs init, which initialises swap and
filesystems, and spawns 8 nice gettys. He has built a new base tarball
(26.9 MB), with only the minimal utilities plus APT. He has also set up an
APT repository for his GNU/FreeBSD packages, including the toolchain and
XFree86. ( Courtesy Debian Weekly News)
--------------
Will Debian survive Linux's popularity? Discussed on Slashdot.
----------------------------------------------------------------------
Knoppix
Quantian Scientific Computing Environment. Dirk Eddelbüttel announced
Quantian, a remastered version of Knoppix. Quantian differs from Knoppix
by adding a set of programs of interest to applied or theoretical workers
in quantitative or data-driven fields. It still retains all of Knoppix'
impressive features in terms of automatic configuration of virtually all
available hardware features. If there is sufficient interest, this project
may become a Debian subproject. (Courtesy Debian Weekly News)
--------------
Slashdot report on the new bootable arcade emulator (MAME) with hardware
detection from Knoppix.
----------------------------------------------------------------------
Libranet
Extremetech has reviewed Libranet 2.8
----------------------------------------------------------------------
Red Hat
The Register has reported that Red Hat has turned a profit once again.
----------------------------------------------------------------------
SuSE
SuSE has announced the availability of SuSE Linux Desktop, which it claims
is the first Linux desktop for large IT infrastructures.
----------------------------------------------------------------------
Software and Product News
----------------------------------------------------------------------
Eset Unveils NOD32 Antivirus For Linux Mail Servers
Eset Software, a provider of Internet software security solutions,
announced today the debut of NOD32 Antivirus for Linux Mail Servers,
extending NOD32 antivirus detection software to the Linux email server
environment. The MTA (Mail Transport Agent)-independent solution runs on
most Linux distributions including RedHat, Mandrake, SuSE, Debian, and
others; it also supports Sendmail, Qmail, Postfix, and Exim, among other
email server software.
----------------------------------------------------------------------
VariCAD
VariCAD has announced the recent release of its mechanical CAD system -
VariCAD 9.0.1.0. The compact CAD package includes many tools for 3D
modeling and 2D drafting, libraries of mechanical parts, surface
development (unbending), calculations of standard mechanical components,
tools for working with bills of materials (BOM) and title blocks. It is a
compact system featuring all the tools mechanical engineering designers
need to work comfortably and effectively. The system is distributed
"fully loaded", with all features included. A free 30-day trial version
is available for download from http://www.varicad.com
----------------------------------------------------------------------
Linux Distro Distribution in Ireland
JMC SOFTWARE has announced that it has been appointed Irish distributor
for FreeBSD as well as the Linux distributions from Red Hat, SuSE and
Mandrake. These are available throughout Ireland at www.thelinuxmall.com
or tel 01 6291282.
----------------------------------------------------------------------
Big Medium
Big Medium is claimed to be an easy-to-use tool for Linux and other UNIX
systems that allows non-technical staff to edit and maintain websites
while providing a wide range of features. The software is a suite of Perl
scripts designed for web servers running the UNIX operating system,
including Linux, Mac OSX, Solaris and FreeBSD. Big Medium is licensed for
$129, and a free online demo is available.
----------------------------------------------------------------------
Zend Performance Suite released/PHP scripting
Zend Technologies, the designers of the PHP scripting engine, has
announced the release of the Zend Performance Suite (ZPS) 3.5. Zend
Performance Suite enables both enterprises and service providers to
overcome scalability issues and to deliver high performance Web sites,
increasing server throughput by up to 30 times - without upgrading their
hardware.
--------------
Zend has also announced that it will team with Sun Microsystems to
initiate a specification for PHP and web-scripting access to Java
technology.
----------------------------------------------------------------------
QuickUML 1.1
Excel Software has begun shipping QuickUML 1.1 for Windows and Linux.
QuickUML is an object-oriented design tool that provides tight integration
and synchronization of a core set of UML models. QuickUML Linux 1.1 adds
improved font handling, an enhanced Contents view for class and object
models, and a toolbar to access code manager commands. QuickUML Linux has
the same features as the Windows edition and also uses QuickHelp to
provide context sensitive application help.
--------------
Excel Software has also announced the availability of QuickHelp for Linux.
QuickHelp is a development tool for creating and deploying application
help to Mac OS 9, Mac OS X, Windows 95 through XP and virtually all Linux
distributions.
----------------------------------------------------------------------
Copyright (c) 2003, Michael Conry. Copying license
http://www.linuxgazette.com/copying.html
Published in Issue 92 of Linux Gazette, July 2003
----------------------------------------------------------------------
+------------------------------------------------------------------------+
| LINUX GAZETTE | HelpDex |
| ...making Linux just a little more fun! | By Shane Collinge |
+------------------------------------------------------------------------+
These images are scaled down to minimize horizontal scrolling. To see a
panel in all its clarity, click on it.
[cartoon]
[cartoon]
[cartoon]
[cartoon]
[cartoon]
[cartoon]
[cartoon]
[cartoon]
[cartoon]
[cartoon]
[cartoon]
[cartoon]
All HelpDex cartoons are at Shane's web site, www.shanecollinge.com.
[BIO] Part computer programmer, part cartoonist, part Mars Bar. At night,
he runs around in a pair of colorful tights fighting criminals. During the
day... well, he just runs around. He eats when he's hungry and sleeps when
he's sleepy.
----------------------------------------------------------------------
Copyright (c) 2003, Shane Collinge. Copying license
http://www.linuxgazette.com/copying.html
Published in Issue 92 of Linux Gazette, July 2003
----------------------------------------------------------------------
+------------------------------------------------------------------------+
| LINUX GAZETTE | Ecol |
| ...making Linux just a little more fun! | By Javier Malonda |
+------------------------------------------------------------------------+
The Ecol comic strip is written for escomposlinux.org (ECOL), the web site
that supports es.comp.os.linux, the Spanish USENET newsgroup for Linux.
The strips are drawn in Spanish and then translated to English by the
author.
These images are scaled down to minimize horizontal scrolling. To see a
panel in all its clarity, click on it.
[cartoon]
[cartoon]
[cartoon]
[cartoon]
[cartoon]
All Ecol cartoons are at tira.escomposlinux.org (Spanish),
comic.escomposlinux.org (English) and http://tira.puntbarra.com/
(Catalan). The Catalan version is translated by the people who run the
site; only a few episodes are currently available.
These cartoons are copyright Javier Malonda. They may be copied, linked
or distributed by any means. However, you may not distribute
modifications. If you link to a cartoon, please notify Javier, who would
appreciate hearing from you.
----------------------------------------------------------------------
Copyright (c) 2003, Javier Malonda. Copying license
http://www.linuxgazette.com/copying.html
Published in Issue 92 of Linux Gazette, July 2003
----------------------------------------------------------------------
+------------------------------------------------------------------------+
| LINUX GAZETTE | select() on Message Queue |
| ...making Linux just a little more fun! | By Hyouck "Hawk" Kim |
+------------------------------------------------------------------------+
Introduction
When using a message queue together with sockets or other
file-descriptor-based Unix facilities, the most inconvenient thing is
that message queues do not support the select() system call. So Unix
programmers usually solve the I/O multiplexing issue in a simple but
ugly way:
while(1)
{
    select on socket with timeout;
    ...
    wait on a message queue with IPC_NOWAIT
}
Certainly, the above implementation is ugly. I don't like it. Another
solution might be to adopt multi-threading. But in this article, I want
to show you a fun approach: implementing a new system call called
msgqToFd(). I'm not trying to provide you with a full-fledged, bug-free
kernel implementation; I just want to present my experiment.
This article might be interesting to readers who like to play with
GNU/Linux kernel source.
msgqToFd() - A new non-standard system call
Here is its signature.
int msgqToFd(int msgq_id)
It returns a file descriptor corresponding to a message queue, which can
be used with select().
If any error happens, it returns -1.
An application can use the call like this:
...
q_fd = msgqToFd(msgq_id);
while(1)
{
    FD_ZERO(&rset);
    FD_SET(0, &rset);
    FD_SET(q_fd, &rset);
    select(q_fd + 1, &rset, NULL, NULL, NULL);
    if(FD_ISSET(0, &rset))
    {
        ...
    }
    if(FD_ISSET(q_fd, &rset))
    {
        r = msgrcv(msgq_id, &msg, sizeof(msg.buffer), 0, 0);
        ...
    }
}
How select() works
A file descriptor is associated with a file structure. In the file
structure there is a set of operations supported by that file type,
called file_operations. In the file_operations structure there is an
entry named poll. What the generic select() call does is call this
poll() function to get the status of a file (or socket, or whatever),
as the name suggests.
In general, select() works like this:
while(1)
{
    for each file descriptor in the set
    {
        call file's poll() to get mask.
        if(mask & can_read or mask & can_write or mask & exception)
        {
            set bit for this fd that this file is readable/writable
            or there is an exception.
            retval++;
        }
    }
    if(retval != 0)
        break;
    schedule_timeout(__timeout);
}
For the detailed implementation of select(), please take a look at
sys_select() and do_select() in fs/select.c of the standard kernel
source code.
Another thing required to understand this is poll_wait(). What it does
is put the current process onto a wait queue provided by each kernel
facility, such as a file, a pipe, a socket or, in our case, a message
queue. Please note that the current process may wait on several wait
queues by calling select().
long sys_msgqToFd(long msqid)
The system call should return a file descriptor corresponding to a message
queue. The file descriptor should point to a file structure which
contains file_operations for message queue.
To do that, sys_msgqToFd() does
1. with msqid, locate the corresponding struct msg_queue
2. allocate a new inode by calling get_msgq_inode()
3. allocate a new file descriptor with get_unused_fd()
4. allocate a new file structure with get_empty_filp()
5. initialize inode, file structure
6. set file's file_operations with msgq_file_ops
7. set file's private_data with msq->q_perm.key
8. install fd and file structure with fd_install()
9. return the new fd
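In the pseudocode style used earlier, those steps amount to something
like this (a sketch only; the helper and field names follow the step
list above, and the real code is in the accompanying msg.c):

long sys_msgqToFd(long msqid)
{
    locate msq, the struct msg_queue for msqid;
    inode = get_msgq_inode();
    fd = get_unused_fd();
    file = get_empty_filp();
    initialize the inode and file structure;
    file->f_op = &msgq_file_ops;
    file->private_data = msq->q_perm.key;
    fd_install(fd, file);
    return fd;
}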
Please take a look at msg.c and the accompanying msg.h provided with
this article. See also sys_i386.c.
msgq_poll()
msgq_poll() implementation is pretty simple.
What it does is
1. With file->private_data, which is a key for a message queue, locate
the corresponding message queue
2. put current process into the message queue's wait queue by calling
poll_wait()
3. if the message queue is empty (msq->q_qnum == 0), set the mask as
   writable (this may be arguable, but let's set that aside for now).
   If not, set the mask as readable
4. return the mask
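In the same pseudocode style, msgq_poll() boils down to this (the names
are illustrative; the real code is in the accompanying msg.c):

msgq_poll(file, wait)
{
    locate msq, the message queue whose key is file->private_data;
    poll_wait(file, msq's wait queue head, wait);
    if(msq->q_qnum == 0)
        mask = writable;    /* empty queue: room to send */
    else
        mask = readable;    /* at least one message waiting */
    return mask;
}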
Modification of existing message queue source code
To support poll() on a message queue, we need to modify existing message
queue source code.
The modification includes
1. adding a wait queue head to struct msg_queue, onto which processes
   calling select() will be put. The wait queue head should also be
   initialized when a message queue is created. Please take a look at
   struct msg_queue and newque() in msg.c.
2. Whenever a new message is inserted into a message queue, any process
   waiting on the message queue (by calling select()) should be
   awakened. Take a look at sys_msgsnd() in msg.c.
3. When a message queue is removed or its properties are changed, all
   the processes waiting on the message queue (by calling select())
   should be awakened. Take a look at sys_msgctl() and freeque() in
   msg.c.
4. To allocate a new inode and file structure, we need to set up some
   file-system-related structures for VFS to operate properly. For
   this purpose, we need additional initialization code to register a
   new file system and set things up. Take a look at msg_init() in
   msg.c.
All the changes are "ifdef"ed with MSGQ_POLL_SUPPORT. So it should be easy
to identify the changes.
File System Related Stuff
To allocate a file structure, we need to set up the file's f_vfsmnt and
f_dentry properly; otherwise you'll see OOPS messages printed out on
your console. For VFS to work correctly with this new file structure,
we need some additional setup, as briefly explained above.
Since we support only poll() for the file_operations, we don't have to
care about every detail of the file system setup code. All we need is a
properly set up f_dentry and f_vfsmnt. Most of the related code is copied
from pipe.c.
Adding a new system call
To add a new system call, two things need to be done.
The first step is to add the new system call at the kernel level, which
we have already done (sys_msgqToFd()).
In the GNU/Linux kernel, all System V IPC related calls are dispatched
through sys_ipc() in arch/i386/kernel/sys_i386.c. sys_ipc() uses a call
number to identify the specific system call requested. To dispatch the
new system call properly, we have to define a new call number (25, in
this case) for sys_msgqToFd() and modify sys_ipc() to call
sys_msgqToFd(). Just for your
reference, please take a look at arch/i386/kernel/entry.S in the standard
kernel source and sys_ipc() in sys_i386.c provided with this article.
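The dispatch change itself is small; in pseudocode (the case-label name
is my own, and the call number 25 is the one defined above):

/* inside sys_ipc()'s switch on the call number */
case MSGQTOFD:                /* call number 25 */
    return sys_msgqToFd(first);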
The second step is to add a stub function for user-level applications.
Actually, all the system call stub functions are provided by GLIBC, and
to add a new system call you would have to modify GLIBC, build your
own, and install it. Oh hell, NO THANKS! I don't want to do that, and I
don't want you to do that either. To solve the problem, I did some copy
and paste from GLIBC. If you look at user/syscall_stuff.c provided with
this article, there is a function named msgqToFd(), which is the stub
for the msgqToFd() system call.
What it does is simply
return INLINE_SYSCALL(ipc, 5, 25, key, 0, 0, NULL);
Here is a brief description for the macro.
ipc : system call number for sys_ipc(). ipc is expanded as __NR_ipc,
which is 117.
5 : number of arguments for this macro.
25 : call number for sys_msgqToFd()
key : an argument to sys_msgqToFd()
INLINE_SYSCALL sets up the arguments properly and invokes interrupt
0x80 to switch to kernel mode and invoke the system call.
Conclusion
I'm not so sure about the practical usability of this modification. I
just wanted to see whether this kind of modification was possible.
Besides that, I want to mention a few issues that need to be addressed.
1. If two or more threads or processes are accessing a message queue,
   one waiting on the queue with msgrcv() and another waiting with
   select(), then the former process/thread will always receive the
   new message. Take a look at pipelined_send() in msg.c.
2. For the writability test, msgq_poll() sets the mask as writable
   only if the message queue is empty. We could instead set the mask
   as writable whenever the queue is not full, with little practical
   difference; I chose this implementation for simplicity.
3. Let's think about this scenario.
1. A queue is created
2. A file descriptor for the queue is created
3. The queue is removed
In this kind of case, what should we do? A correct solution would be
to close the fd when the queue is removed. But this is impossible,
since a message queue can be removed by any process which has the
right to do so. This means the process removing the message queue may
not have a file descriptor associated with the message queue, even if
the message queue is mapped to a file descriptor by some other
process. Additionally, if the same queue (with the same key) is
created again, the old mapping will still be maintained.
4. Efficiency problem. All the processes waiting on the wait queue by
   calling select() will be awakened when there is a new message.
   Eventually only one process will receive the message, and all the
   other processes will go back to sleep.
5. No support for message types. Regardless of the message type, if
   there is any message, select() will return.
Bugs and Improvements
DIY :-)
Source Code
msg.c Modified message queue implementation
msg.h Header file for message queue
sys_i386.c Modified for the new system call
user/Makefile Makefile to build test program (rename from
Makefile.txt to Makefile)
user/syscall_stuff.c Stub function for msgqToFd()
user/msg_test.h Header for msgqToFd()
user/msgq.c Test program source
user/msgq2.c Another test program
I used GNU/Linux kernel 2.4.20 on x86 for this experiment.
To build a new kernel with this modification, copy
msg.c to ipc/msg.c
msg.h to include/linux/msg.h
sys_i386.c to arch/i386/kernel/sys_i386.c
then build and install it!
Before running the test programs, please be sure to make key files:
touch .msgq_key1
touch .msgq_key2
----------------------------------------------------------------------
Copyright (c) 2003, Hyouck "Hawk" Kim. Copying license
http://www.linuxgazette.com/copying.html
Published in Issue 92 of Linux Gazette, July 2003
----------------------------------------------------------------------
+------------------------------------------------------------------------+
| LINUX GAZETTE | Linux to Save the Health of the World |
| ...making Linux just a little | By Janine M Lodato |
| more fun! | |
+------------------------------------------------------------------------+
Abstract: In order to deliver a workable and affordable telemed system,
we need to break away from all conventions:
* replace telephone-line based connectivity with data-over-power-line
  technology; this is readily available all over the world and at a
  lower cost.
* replace the Windows environment with Linux, due to its lower cost,
  higher reliability and simplicity
* replace the keyboard- and mouse-based human-to-machine interface
  with voice recognition.
This will be done by a group of dedicated and courageous teams outlined
below, but they do need one more consortium member: a Linux
implementation team. This article is written with the aim of finding
such a Linux team.
I. Statement of Purpose
TelMedForte is a project formed as a consortium of:
* a large MasterASP-BSP (=Application Services Provider / Business
Services Provider)
* experienced entrepreneurs and professionals located in the Silicon
Valley of California
* a VAR for Siemens medical group
* a VAR for Siemens data-over-power-line division
* a small high-tech company offering medical e-record systems
The goal of TelMedForte is to become the leader in the design, integration
and implementation of advanced telemedicine solutions enabled by power
line and wireless broadband technologies, and through leveraging existing
wireline facilities.
The financial support for all the various telemedicine implementation
efforts in the rural communities will be based on USDA Loans/Grants
made available to rural communities under the large USDA solicitation:
the Distance Learning and Telemedicine Loan and Grant Program.
In each community the above consortium teams up with the local medical
facilities to jointly offer the community-wide collaborative telemed
system and distance-learning system.
TelMedForte will also provide integrated solutions for the medical
facilities of each community by integrating telemedicine applications and
the facilities management applications of MasterASP-BSP. This will bring
significant additional revenues to MasterASP-BSP.
II. Background and Market
For TelMedForte, telemedicine is the examination, diagnosis and treatment
of health-care consumers by health care service providers when direct,
face-to-face interaction is inconvenient, costly, inefficient or
ineffective for the consumer or purchaser of healthcare services. Also
included is telemonitoring of medical devices. Broadband deployment and
adoption has proceeded at a blinding rate over the past few years.
However, there is a market of at least 30 million health care consumers in
the U.S. alone without broadband access. Several key factors make
TelMedForte's offering the right technology at the right time:
* Gartner forecasts the number of power line broadband lines in service
worldwide will be 200,000 by end-2003 and will hit the 1 million mark
by 2006 with equipment revenues exceeding $200M.
* Cost of bandwidth continues to decrease. The cost to go by each home
using power line broadband communications is significantly less than
Cable or DSL. There are a number of medium-voltage power-line
broadband trials that have been successful and the technology is
available today. Also, the regulatory environment is favorable.
TelMedForte provides a 5-10x improvement in performance, at
substantially lower costs, over available power line carrier (PLC)
solutions in the market.
* Payer issues have been resolved using the MedStage e.Health web-based
telemedicine solution from Siemens Medical Health Services
Corporation.
* Videoconferencing equipment and applications have become affordable,
and standardized. Use of videoconferencing for home health care has
demonstrated that provider resources can be managed more efficiently
while also improving quality of care.
* Sharing of medical images has been made easier through the use of
Siemens syngo(R) technology
* Technology is available for telemonitoring using Siemens Hot Key
wireless modules
* Additional applications will also be included such as electronic
medical records (Sensitron), e-visit (Rao), etc.
III. TelMedForte Solutions
TelMedForte will create integrated solutions including multimedia
communications to regional and teaching hospitals, enhanced communications
with medical devices (including Siemens "syngo" enabled and Hot Key),
delivery of broadband to the homes of patients using 216 Mbps
medium-voltage networks (over the existing power grid) and HomePlug
standard in-home power line modems, provision of telemedicine software
solutions (MedStage), in-home devices to support the provision of
telemedicine, plus in-home and mobile telemonitoring of medical devices
using Hot Key or other wireless modules.
IV. TelMedForte Technology
The TelMedForte system includes components to distribute a high capacity
connection over the medium-voltage grid to multiple end-user gateways
located at or near neighborhood transformers. Consumer premises are linked
into the network with either low-voltage HomePlug, Bluetooth or 802.11x
modems or a combination of them.
This power line-wireless combination is useful as a ubiquitous network for
in-home and mobile medical device monitoring (e.g., using Hot Key GPS with
the appropriate wireless modules).
Alternative power line broadband technologies are also starting to appear
in the market. These are all based on traditional balanced-line approaches
and operate in the 2-40MHz frequency ranges. These solutions are
inherently limited in performance in terms of network capacity and
susceptibility to noise interference, compared to the TelMedForte system.
V. Competition
Siemens is already in the telemedicine market with MedStage(R), but they
are not going after the Rural communities which will be the major market
for TelMedForte. In fact, TelMedForte will be a VAR for Siemens.
Existing Cable and DSL access technologies are infrastructure rather
than direct competition for the TelMedForte solution; in fact, they
can be suppliers of MAN services to TelMedForte in each community.
TelMedForte with HomePlug
and wireless solutions beats both in terms of performance - higher
bandwidth, symmetric operation, low-latency - and cost - 2-5x lower cost
in terms of installed infrastructure.
VI. Strategic Relationships
As a VAR of Siemens solutions, TelMedForte will integrate solutions from a
number of different Siemens divisions including MedStage from Med, Hot Key
and wireless module technology from ICM, and low-voltage Powerline modems
from Efficient Networks (a Siemens company). Broadband access is an
enabler to video and image based telemedicine applications. In order to
effectively market telemedicine in rural communities, TelMedForte will
leverage government and other grant funding and loan programs for
equipment and broadband deployment. TelMedForte will also provide
integrated solutions for the medical facilities of each community by
integrating telemedicine applications and the facilities management
applications of MasterASP-BSP. TelMedForte will add additional value to
the System Integrator (SI) and VAR value chains by leveraging the
expertise of its management team in product marketing and support. These
elements converge to create a powerful mixture of enabling technologies,
expertise and implementation of new products and services. An example of
the new generation of services and the strategic VAR relationships capable
of adapting and developing new services at Layers 6-7 into vertical
markets include TelMedForte forming localized consortia with:
* Local health care providers
* Regional and teaching hospitals
* Application Service Providers currently servicing the local health
care providers
* Local power companies; and,
* Internet Service Providers (ISPs) to provide seamless end-to-end
telemedicine solutions. Siemens Global Services will also be a VAR of
TelMedForte's packaged telemedicine solutions.
Although market estimates vary widely, one estimate of the global demand
for telehealth services is $1.125 trillion. TelMedForte will support these
offerings with a comprehensive set of business case information and tools
to help its customers develop winning strategies for their broadband
telemedicine service offerings.
----------------------------------------------------------------------
Copyright (c) 2003, Janine M Lodato. Copying license
http://www.linuxgazette.com/copying.html
Published in Issue 92 of Linux Gazette, July 2003
----------------------------------------------------------------------
+------------------------------------------------------------------------+
| LINUX GAZETTE | My Open Radio |
| ...making Linux just a little more fun! | By Mark Nielsen |
+------------------------------------------------------------------------+
1. Introduction
2. Setting up Apache
3. Using Grip to Rip
4. The Python Script
5. Play the List
6. Conclusion
7. References
Introduction
I am sick of playing cds. Half the songs on a cd suck. I don't like
switching cds in and out. I am sick of commercial radio with their stupid
mid-life crisis hosts who try to appeal to teenagers by trying to act like
them (grow up). I like to listen to music (from cds) or shows on NPR as
background noise while I program. I decided to develop a way to make my
computer play songs and NPR shows as though it were a radio. This will
eliminate cds and commercial radio shows. I want my computer to play this
stuff in a random order. The first thing I wanted to do was rip songs from
my cds and play them in a random order. The second thing (which is not in
this article) was to download a playlist of all the shows I like to listen
to on NPR (I hope someday NPR will accept my offer to develop playlists
(as my donation) for their listeners).
For now, I am keeping things really really simple. In the future, I plan
to add playlists, give songs weight, put stuff into a PostgreSQL
database, add accounts, etc.
I am very lazy. So lazy, I didn't bother to look long at the various
web-based mpeg organizers of your favorite songs. I just wanted something
to spit out 200 songs in a random order so that it simulates a radio
station. I first had to rip the songs and then write a simple Python
script to spit out a playlist.
Configuring Apache
On your Linux server, find your html root directory for your httpd server.
On some systems, this is located at "/var/www/html". Assuming that it is,
do this:
cd /var/www/html
mkdir audio
Now copy all of your mp3, rm, wav, or other audio files into the directory
"/var/www/html/audio". NOTE: Do not use your web server for anybody
but yourself. Only you may listen to these songs, or you may run into
copyright problems. Contact an attorney regarding any legal issues.
To start your webserver, usually you can do this "service httpd start". If
that doesn't work, then look at the documentation that came with your
Linux distribution to figure out how to start and stop the web service.
Usually the default web server on most Linux systems will be Apache.
Using Grip to Rip
After looking at many programs, Grip seemed to be the easiest to use to
rip songs from a cd. It organizes the songs by author and album. Nice.
Here are the steps I used to configure Grip.
1. Download and install "LAME" from http://www.mp3dev.org. Be aware of
any patent issues.
cd /usr/local/src
lynx --source http://twtelecom.dl.sourceforge.net/sourceforge/lame/lame-3.93.1.tar.gz > lame-3.93.1.tar.gz
tar -zxvf lame-3.93.1.tar.gz
cd lame-3.93.1
./configure --prefix=/usr/local/lame
make install
ln -s /usr/local/lame/bin/lame /usr/bin/lame
2. Start Grip.
3. Configure Grip. Under the "Config" menu, do this.
Click on Encode, choose 'lame' as the encoder. Where it says "Encode
File Format" make sure you specify the directory "/var/www/html/audio"
as the base directory. Mine looked like this
'/var/www/html/audio/%A/%d/%t_%n.mp3'.
4. Click on "Tracks" in the top menu and select the tracks you want to
rip.
5. Click on "Rip" in the top menu and then click on "Rip + Encode".
The Python Script
Put this Python script at "/var/www/cgi-bin/playlist.py", then make it
executable with "chmod 755 /var/www/cgi-bin/playlist.py". After you
have properly installed the script (please use Python 2.2) and you
know it works right, you might have to change the url from 127.0.0.1
to your computer's network ip address so that other computers in your
house can play the music as well.
#!/usr/bin/python
# Make sure this line above is the first line of this file.
### Copyright under GPL
## import the python modules we need.
import os, re, time, random
## Set up some variables. You can change these variables for your needs.
Home = "/var/www/html/audio"
Url_Base = "http://127.0.0.1/audio"
Song_Max = 200
List_Type = "mpegurl"
## DO NOT CHANGE ANYTHING BELOW HERE UNLESS YOU ARE A PYTHON GEEK.
File_Match = re.compile('\.(mp3|rm|wav|ogg|mpeg)$')
Home_Re = re.compile('^' + Home)
List_Types = {'smil':'application/smil', 'mpegurl':'audio/x-mpegurl'}
#---------------------------------------
## This function will go through and get the absolute path of all files
## that match. It is a recursive method.
def Dir_Contents(Item=""):
    Final_List = []
    if Item == '': return []
    elif os.path.isdir(Item):
        List = os.listdir(Item)
        for Item2 in List:
            Item3 = Item + "/" + Item2
            Temp_List = Dir_Contents(Item=Item3)
            for Item4 in Temp_List: Final_List.append(Item4)
    elif os.path.isfile(Item):
        if File_Match.search(Item): return([Item])
        else: return([])
    return (Final_List)
#--------------------------
List = Dir_Contents(Home)
List_Copy = List[:]  ## a real copy; plain assignment would only alias List
## Randomize how many times we call random.
Secs = int(time.strftime('%S')) * int(time.strftime('%H')) * int(time.strftime('%M'))
for i in range(0,Secs): random.random()
## Randomly get one file at a time until there is none left.
New_List = []
while (len(List_Copy) > 0):
    Position = random.randint(0,len(List_Copy) - 1)
    New_List.append(List_Copy[Position])
    del List_Copy[Position]
## Redo the urls in the list.
Urls = []
for Item in New_List:
    ## For each item, strip the Home directory prefix and prepend the url.
    Url = Url_Base + Home_Re.sub('', Item)
    Urls.append(Url)
## If we have more songs than the number we want to listen to,
## cap it off. Bonus points if you can figure out how many songs
## are in this list when Song_Max = 200.
if len(Urls) > Song_Max: Urls = Urls[0:Song_Max]
## If the idiot who edited this file has an invalid list type....
if not List_Types.has_key(List_Type): List_Type = 'mpegurl'
Content_Type = List_Types[List_Type]
### Now print out the content.
print "Content-Type: " + Content_Type + "\n"
if List_Type == 'mpegurl':
    for Url in Urls: print Url
elif List_Type == 'smil':
    ## Emit a minimal SMIL playlist.
    print '<smil>\n <body>\n  <seq>'
    for Item in Urls: print '   <audio src="' + Item + '"/>'
    print '  </seq>\n </body>\n</smil>'
else:
    for Url in Urls: print Url
#------------------------------------------------------------------------
# Open Radio version 1.0
# Copyright 2003, Mark Nielsen
# All rights reserved.
# This Copyright notice was copied and modified from the Perl
# Copyright notice.
# This program is free software; you can redistribute it and/or modify
# it under the terms of either:
# a) the GNU General Public License as published by the Free
# Software Foundation; either version 1, or (at your option) any
# later version, or
# b) the "Artistic License" which comes with this Kit.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See either
# the GNU General Public License or the Artistic License for more details.
# You should have received a copy of the Artistic License with this
# Kit, in the file named "Artistic". If not, I'll be glad to provide one.
# You can look at http://www.perl.com for the Artistic License.
# You should also have received a copy of the GNU General Public License
# along with this program in the file named "Copying". If not, write to the
# Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
# 02111-1307, USA or visit their web page on the internet at
# http://www.gnu.org/copyleft/gpl.html.
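The pick-one-at-a-time loop in the script above is a shuffle by repeated random removal. Here is a small standalone sketch of the same technique (the file names are made up for illustration; the real script shuffles paths found under the home directory):

```python
import random

def shuffle_by_removal(items):
    # Work on a copy, the way the script works on List_Copy.
    pool = list(items)
    shuffled = []
    while pool:
        # Pick a random position and move that entry to the result.
        pos = random.randint(0, len(pool) - 1)
        shuffled.append(pool.pop(pos))
    return shuffled

songs = ["a.mp3", "b.mp3", "c.mp3", "d.mp3"]
mixed = shuffle_by_removal(songs)
# Cap the result the way the script caps New_List[0:Song_Max].
capped = mixed[0:2]
```

The same effect can be had with random.shuffle(), but the remove-one-at-a-time version makes the "until there is none left" idea explicit.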
Play the List.
Personally, I use Real Player. I tried to use xmms, but it didn't work for
some reason (with the mpegurl list). Real Player accepts both smil and
mpegurl, so I just use it. I would like to switch to some free GPLed
player instead someday.
Just type this into your browser, Real Player, or whatever other player
you are using: "http://127.0.0.1/cgi-bin/playlist.py".
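If you just want to see what the script returns, you don't need a media player at all; here is a minimal sketch that fetches and splits the playlist with the Python standard library (the URL is the one above, and parse_mpegurl is a helper name of my own):

```python
from urllib.request import urlopen

def parse_mpegurl(body):
    # An mpegurl (.m3u) playlist is just one song URL per line.
    return [line for line in body.splitlines() if line.strip()]

def fetch_playlist(url):
    # Fetch the CGI output and split it into song URLs.
    with urlopen(url) as resp:
        return parse_mpegurl(resp.read().decode("utf-8", "replace"))

# With the web server and script from the article running:
# songs = fetch_playlist("http://127.0.0.1/cgi-bin/playlist.py")
```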
Conclusion
This little setup is perfect for me. In the future, I want to create
accounts and playlists, keep track of which songs haven't been played yet,
give songs weights, and a bunch of other things. For now, I am finished
with this and will move on to making a playlist of my favorite NPR shows.
I have big ideas about where this could lead. Since I have a lot of
unfortunate experience with Flash, Real Player, Windows Media Player, and
Javascript, it seems like something could develop here. I have heard a lot
about internet radio stations, but it seems like none of them are really
approaching the market right. They seem to be stuck in the old days of
radio. They need to move forward and not be constrained (legally) by the
media giants. It seems like the internet radio stations don't see the
big picture. For now, I am just going to develop my own little radio for
myself and maybe do something with it for real later.
References
1. http://www.nostatic.org/grip/
2. http://www.apache.org
3. http://www.python.org
4. http://service.real.com/help/library/earlier.html
5. If this article changes, it will be available here
http://www.tcu-inc.com/Articles/34/open_radio.html
[BIO] Mark Nielsen works at Crisp Hughes Evans. In his spare time, he
writes articles relating to Free Software (GPL) or Free Literature (FDL).
Please email him at articles@tcu-inc.com and put in the subject "ARTICLE:"
or the message will be deleted and not even looked at -- to stop spammers.
----------------------------------------------------------------------
Copyright (c) 2003, Mark Nielsen. Copying license
http://www.linuxgazette.com/copying.html
Published in Issue 92 of Linux Gazette, July 2003
----------------------------------------------------------------------
+------------------------------------------------------------------------+
| LINUX GAZETTE | Setting up the mail subsystem in Linux |
| ...making Linux just a little | By Ben Okopnik |
| more fun! | |
+------------------------------------------------------------------------+
The mail system is - or can be - one of the more complex parts of the
Linux jigsaw puzzle. True, for a lot of folks, it's not complex at all:
they install Netscape, enter their POP/SMTP server names, username, and
password, and off they go... unless, of course, they want to use anything
else that utilizes the mail system - such as writing a script that will
mail them a report when the file system is almost full, or deciding that
they'd like a different Usenet news reader, or even trying to mail in a bug
report using the "bug" or "bashbug" utilities. Ooops...
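As a concrete illustration of that first case, here is a rough sketch of such a "file system is almost full" report script. It assumes a working local mail(1) command (which is exactly what this article sets up); the mount point, threshold, and recipient are arbitrary:

```python
import os
import subprocess

def disk_usage_percent(path="/"):
    # Percentage of the file system at `path` currently in use.
    st = os.statvfs(path)
    used = st.f_blocks - st.f_bfree
    return 100.0 * used / st.f_blocks

def mail_report(percent, threshold=90, recipient="root"):
    # Hand a warning to the local mail system via mail(1) --
    # this only works once the mail subsystem is operational.
    if percent < threshold:
        return False
    body = "Warning: the file system is %.1f%% full\n" % percent
    subprocess.run(["mail", "-s", "Disk space warning", recipient],
                   input=body.encode(), check=True)
    return True
```

Run it from cron and the report lands in your mailbox -- provided the pieces described below are in place.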
In Unix, mail is closely integrated with the OS itself, and not having it
working properly is like driving a car with a deflated tire. Things work
kinda OK, as long as you don't get up above 5mph, or shift your weight to
the wrong side - or even let your girlfriend get in for a ride. As soon as
you do, problems crop up by the dozen. A working mail system - like a net
connection - is one of the basic assumptions in any Unix-flavored OS. What
I'd like to do here is show you at least one example of a working mail
system, which you can then adjust or interpolate to your own setup; the
important part is being aware of the pieces that need to be operational in
order for this to happen.
THE PIECES THAT MAKE UP THE WHOLE
The mail system consists of three somewhat loosely defined pieces: the MUA
(Mail User Agent), which is the software you use to read and compose your
mail, the MTA (Mail Transfer Agent), usually an SMTP server, although some
directly-invoked programs are also in use; and a retrieval program. (Some
SMTP servers also contain POP functionality, but a stand-alone program is
more common.) The MUA can be pretty much anything you want: it's only a
front end, meaning that you can use whatever you prefer once the other two
pieces are working. You can even stick with Netscape if you like! For the
other two in this example, I'll use Exim - a well-known MTA, and Eric S.
Raymond's "fetchmail", probably the most-commonly used retrieval utility
in the world.
GETTING YOUR STUFF
There's not much complexity in setting up "fetchmail". Pretty much all
that's required is creating a file called ".fetchmailrc" in your home
directory and specifying your POP-related information. As an example,
here's what mine looks like:
----------------------------------------------------------------------
# I want to log all retrievals to "/var/log/mail.*"
set syslog
# Set stuff that's the same for everybody
defaults protocol pop3,
timeout 300,
nokeep,
fetchall,
mda "procmail -f-"
# Get mail from my ISP
poll "pop.happybruin.com",
user "fuzzybear"
password "wouldnt_you_like_to_know";
# Grab it from my other account
poll "pop3.bearsden.com",
user "ben-fuzzybear",
password "shhh_its_a_secret";
----------------------------------------------------------------------
Just a quick overview of the above - it's very well covered in the
"fetchmail" man page: I'm retrieving mail from two different accounts.
Since I have a somewhat flaky Net connection (a wireless modem), I've set
"fetchmail" to time out any given connection after 5 minutes (300
seconds). I've also told it to delete all the mail on the server once it
is retrieved ("nokeep"), to ignore the "already read" flag and get all the
mail that's waiting ("fetchall"), and to use "procmail" to do some header
processing for me ("mda ..."). The last is not needed for everyone, but
some broken SMTP servers "forget" to include a so-called "Envelope-from"
header, and this fixes it. Other than that, I think everything is pretty
self-explanatory.
There are generally two ways in which fetchmail is launched. It can be
started as one of the "init" scripts (this is useful if you have an
always-on connection), or from your "/etc/ppp/ip-up.d" script (more common
for dial-up connections.) Usually, you get to choose this during
"fetchmail" setup. Each user can also start it manually, as a one-time run
(simply by typing "fetchmail" at the command line) or as a daemon that
will poll the mailboxes at a set interval (I like to do it this way, with
a "fetchmail -d 600" which polls at 10 minute intervals. This can also be
defined in ".fetchmailrc".)
"fetchmail" is far more flexible and powerful than this simple situation
shows. Suffice it to say that it can do almost any kind of mail retrieval,
with any valid mail protocol; unless you have some truly complicated
lash-up - and if you did, you'd know about it - it will work for you. Of
course, if you have your own preferred retrieval agent, that's fine too.
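For the curious, what "fetchmail" does for each "poll" entry is ordinary POP3. Here is a stripped-down sketch using Python's poplib (the host and credentials are the fake ones from the example above, and `deliver` stands in for the MDA):

```python
import poplib

def drain_mailbox(conn, deliver):
    # "fetchall" + "nokeep": grab every waiting message, then delete it.
    count, _size = conn.stat()
    for i in range(1, count + 1):
        _resp, lines, _octets = conn.retr(i)
        deliver(b"\n".join(lines))
        conn.dele(i)

def fetch_all(host, user, password, deliver, timeout=300):
    # One "poll" entry from .fetchmailrc, done by hand.
    conn = poplib.POP3(host, timeout=timeout)
    try:
        conn.user(user)
        conn.pass_(password)
        drain_mailbox(conn, deliver)
    finally:
        conn.quit()

# fetch_all("pop.happybruin.com", "fuzzybear", "wouldnt_you_like_to_know",
#           deliver=lambda msg: open("mbox", "ab").write(msg + b"\n"))
```

This skips everything that makes fetchmail valuable -- retries, multiple protocols, header fixups, daemon mode -- but it shows how little magic there is in the retrieval step itself.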
LOOKING AT THE BIG PICTURE
Setting up your SMTP server doesn't necessarily have to be much more
complex than the above - but it definitely should take a lot more thought.
The main thing to consider is, where do you fit into the Net? For those of
you who have never had to think of yourself on that large of a scale,
that's yet another piece of the puzzle: the reality is that most of the
Net is built up of little pieces - such as the computer that you're
sitting in front of right now. Your ISP is just another node of the Net;
true, you're connecting through their routers, but once you have
connected, you're just as much a part of the Net as they are - and
consequently, responsible for making sure that your little piece works in
harmony with the rest.
(One of the security-related RFCs I read recently - I don't recall exactly
which one - mentions that possibly 50%+ of the mail servers connected to
the Net are misconfigured to some degree. Pretty scary statistic... but
also quite a testament to the reliability and flexibility of the Net mail
system. All of this points up the need for all of us to contribute to the
Good Side of the Force - by doing our part.)
For a lot of us, the situation is very simple: a desktop machine, a single
ISP, and no need to do our own SMTP - at least any more than is necessary
to forward all our mail to the ISP's SMTP server. In this situation,
pretty much any MTA will do - and there's next to no tweaking necessary,
except for address rewriting. Just answer the questions that you're asked
at setup time, and - bingo, you're off and running. However, this part of
the system is a little more "touchy" when it comes to changes: if you use
more than one ISP, or want to do anything else even slightly different
from the basics, it's going to take a little configuration... and this is
where most folks run afoul of the mail beast.
----------------------------------------------------------------------
"sendmail"'s configuration file looks like someone's been banging their
head on the keyboard. And after looking at it... I can see why!
-- Anonymous
----------------------------------------------------------------------
"sendmail.cf" has been responsible for more than one sysadmin being
dragged away while tied down to a stretcher and foaming at the mouth. It's
an ugly creature... and the configuration file that it's created from
isn't any prettier. I've detailed a bit of its workings back in LG#58
(Configuring Sendmail in RedHat 6.2, or My Adventure in the Heart of the
Jungle); at this point, I have the twitching mostly under control, and the
doctors tell me that I can stop taking these little pills in another year
or so...
Seriously, this is a decision point. If your system's network connection
is going to change in major ways (ISP, host name, from a dial-up to a
full-on Internet host) more than once or twice, you should consider doing
your own SMTP. As an example, I do my own because I travel for a living,
and use lots of different ISPs (dial-up, wireless, cable modems in hotel
rooms, etc.) in many different system configurations. Doing it this way
means never having to worry about what anyone else's mail setup is like,
or having to configure anything when I move from one system to another - a
great convenience. In other words, doing your own is not a big deal to
implement, but it is a critical decision that should be made based on your
own needs. I find the "do-it-yourself" approach to be far more flexible,
powerful, and hassle-free in all cases where the environment is anything
other than static.
SMTP SETUP OPTIONS
So, at this point, we've defined two typical SMTP setups:
1) Delegate everything except address rewriting (that has to be done
locally.) The ISP's SMTP server (the "smarthost", from our perspective)
takes care of all the routing. This is a good way to go when you have a
static setup that's not likely to change, especially through a major ISP
with a good reliability record (well, we can dream, can't we?)
2) Do everything ourselves. This has a number of benefits, including
bypassing unreliable ISP mail services and the ability to instantly see if
your mail has actually been delivered to the host on the other end (a few
years ago, my ISP held some of my emails for over a week, and discarded a
batch of them without notifying me. That was what initially started me
doing this...)
Generally, this is a decision that's made during the installation of the
MTA (Mail Transfer Agent). There's not much to it; in the case of Exim,
you're given five choices, of which only the first two really apply here
(the "eximconfig" program runs during the installation, or may be re-run
manually at any time):
----------------------------------------------------------------------
You must choose one of the options below:
(1) Internet site; mail is sent and received directly using SMTP. If your
needs don't fit neatly into any category, you probably want to start
with this one and then edit the config file by hand.
(2) Internet site using smarthost: You receive Internet mail on this
machine, either directly by SMTP or by running a utility such as
fetchmail. Outgoing mail is sent using a smarthost, optionally with
addresses rewritten. This is probably what you want for a dialup
system.
...
----------------------------------------------------------------------
Note that these two choices fit the above two options: the "do everything
ourselves" approach dovetails into #1, and the "smarthost" version is #2.
"eximconfig" then walks you through a few more questions, one of which is
----------------------------------------------------------------------
...
Which user account(s) should system administrator mail go to?
Enter one or more usernames separated by spaces or commas. Enter
`none' if you want to leave this mail in `root's mailbox - NB this
is strongly discouraged. Also, note that usernames should be lowercase!
----------------------------------------------------------------------
Since you're the one who's configuring the system, I assume you'll also be
the one administering it, so you should direct this to your own username.
If you go the "smarthost" route, you'll be asked for the name of the
smarthost; be sure to enter your ISP's SMTP server name correctly.
THE BELLY OF THE BEAST
Once that's done - and we'll get to what else we need to do in the two
different cases - we need to set up address rewriting. After all, your
email address as seen by the system is "username@host", and unless you
have your own domain, that isn't going to be an Internet-valid address.
Fortunately, with Exim, it's not difficult.
First, we'll edit "/etc/exim/exim.conf", and add the following to the 6th
section ("REWRITE CONFIGURATION"):
----------------------------------------------------------------------
*@localhost ${lookup{$1}lsearch{/etc/email-addresses}\
{$value}fail} Ffsr
----------------------------------------------------------------------
This will search through the file where the rewriting rules are specified,
and change the addresses as necessary. Note that in some cases,
"exim.conf" will already have a line like this; just make sure that
everything, particularly the "Ffsr" flags (which rewrite the
"Envelope-from", "From:", "Sender:", and "Reply-to:" headers), is correct.
Next, we'll edit - surprise! - "/etc/email-addresses" and insert the
entries for all our users.
----------------------------------------------------------------------
# Root shouldn't be emailing anyone outside, but just in case...
root: happybear@bruins.com
ben: happybear@bruins.com
rivka: sweetie@here.com
linda: babe@westcoast.org
jen: saucy@wench.net
----------------------------------------------------------------------
That's it. Unlike "sendmail", there are no databases to rebuild; the file
is read "on the fly". One of the reasons I like Exim is because its
conffile is copiously documented with comments. As well,
"/usr/share/doc/exim/spec.txt.gz" is a complete (and very large) manual
that details every bit of the configuration in fine detail.
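Exim's "lsearch" is just a linear scan of a "key: value" file. Here is a rough Python model of the rewriting rule above, using the example entries from the text (the helper names are mine, and this glosses over Exim's full address parsing):

```python
def lsearch(text, key):
    # Linear search through "key: value" lines, the way Exim's
    # lsearch lookup reads /etc/email-addresses; '#' starts a comment.
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        k, _sep, v = line.partition(":")
        if k.strip() == key:
            return v.strip()
    return None  # the {...fail} branch: no rewriting happens

def rewrite(addr, table):
    # The "*@localhost" rule: only local addresses are rewritten.
    local, _at, domain = addr.partition("@")
    if domain != "localhost":
        return addr
    found = lsearch(table, local)
    return found if found is not None else addr

email_addresses = """\
# Root shouldn't be emailing anyone outside, but just in case...
root: happybear@bruins.com
ben: happybear@bruins.com
"""
```

The "first match wins, fail means leave it alone" behavior is exactly why the file can be edited and take effect on the fly.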
THE DIFFERENT APPROACHES
If you're going with the "smarthost" option, at this point you're done.
Skip ahead to the "TESTING" section. If you're a do-it-yourselfer like me,
though, there's just a tiny bit more stuff to write: since we're now
responsible for getting the mail to where it's going, we also have to deal
with the situation when the delivery fails (i.e., the receiving host or an
intermediate router is down, we lose the network connection for a moment,
etc.) Most of that behavior is well-defined already, as it is in any
decent MTA, but I've found one thing that reduces "trouble emails" from
Exim (which it will send to you as the administrator) to nearly zero: in
the first section of "/etc/exim/exim.conf", you should add the following:
----------------------------------------------------------------------
auto_thaw = 5m
----------------------------------------------------------------------
Whenever a message is marked "frozen" (undeliverable) by Exim, this will
"thaw" it (reattempt delivery) after five minutes. Since most failures are
only temporary, this setting manages to "push" mail through almost a
hundred percent of the time, as long as the user and the domain are valid.
Oh, by the way. Now that you're a Big-Time Mail Administrator... :) what
is it, exactly, that you're supposed to do? Not that much, actually.
Decide what to do with problem messages (if Exim notifies you that
something is stuck in the queue, run "mailq" to see what it is and look at
its log file with "exim -Mvl <message-id>"), add new users to
"/etc/email-addresses", and respond to any problem or spam notifications
by other folks. Read the "exim" man page, just to get familiar with this
beast. That's pretty much it. Experienced large-system mail administrators
may shrink in horror and make warding signs in my direction, but for a
single-machine or a small LAN, the above is pretty much all that's
required. Once properly set up, a mail system is a remarkably trouble-free
and mostly self-correcting sort of creature.
TESTING
Exim has a series of built-in testing modes, one of which is about to come
in very handy. The main thing that we need to test is whether our
rewriting rules work - and that's simple:
----------------------------------------------------------------------
Baldur:~$ exim -brw ben
sender: happybear@bruins.com
from: happybear@bruins.com
to: ben@localhost
cc: ben@localhost
bcc: ben@localhost
reply-to: happybear@bruins.com
env-from: happybear@bruins.com
env-to: ben@localhost
----------------------------------------------------------------------
Test it with a bare username, "user@localhost", and user@your_hostname;
all of these should be properly rewritten. Also, test it with an arbitrary
Internet-valid email address to make sure that it doesn't get changed.
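Those checks are easy to script. Here is a sketch that shells out to "exim -brw" and parses the result into something you can assert on (it assumes "exim" is on your PATH; the helper names are made up):

```python
import subprocess

def parse_brw(output):
    # Turn the "header: address" lines from "exim -brw" into a dict.
    report = {}
    for line in output.splitlines():
        header, _sep, value = line.partition(":")
        if header.strip():
            report[header.strip()] = value.strip()
    return report

def rewrite_report(address):
    # Run "exim -brw <address>" and return the rewritten headers.
    out = subprocess.run(["exim", "-brw", address],
                         capture_output=True, text=True, check=True).stdout
    return parse_brw(out)

# rewrite_report("ben")["from"] should be your Net-valid address;
# rewrite_report("somebody@example.org")["from"] should come back unchanged.
```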
Once all of the above works right, your mail system should be at least
reasonably configured (the folks who set up the various distros do a
pretty good job of the basics, in every case I've seen so far.) Test it
out by sending yourself some mail, and look at the headers; the "From:"
and the "Reply-to:" (if one is defined) should match your Net-valid
address, not just your plain user name. Here's an example (the actual
addresses/IPs have been changed, as in the rest of this article, to foil
spambots. Eat fake address, spammer-slime!):
In the Mutt composition menu:
----------------------------------------------------------------------
From: "Benjamin A. Okopnik"
To: Benjamin Okopnik
Cc:
Bcc:
Subject: Rewrite test
Reply-To:
Fcc: =Sentmail
Mix:
PGP: Clear
----------------------------------------------------------------------
Note that in the local client, the "From:" address is a local one. You
could also - now that you have a real mail system - simply do it from the
command line as
----------------------------------------------------------------------
mail -s "Rewrite test" happybear@bruins.com
----------------------------------------------------------------------
Either way - now, we send it off, and when we get it back - presto!
----------------------------------------------------------------------
Date: Tue, 30 Apr 2002 03:47:19 -0400
From: "Benjamin A. Okopnik"
To: Benjamin Okopnik
Subject: Rewrite test
WARNING: Deep Magic in progress.
Ben Okopnik
-=-=-=-=-=-
----------------------------------------------------------------------
If we look at the actual headers (in Mutt, press the "h" key), we'll see
the following:
----------------------------------------------------------------------
From ben Tue Apr 30 03:48:15 2002
Return-Path:
Received: from Baldur (pzw-199-999-99-999.sunbridge.com [199.999.99.999])
by bruins.com (9.10.3/9.10.3) with ESMTP id g3U7lR45008674
for Tue, 30 Apr 2002 00:47:32 -0700 (PDT)
Received: from ben by Baldur with local (Exim 3.35 #1 (Debian))
id 172SM7-0004nd-00
for Tue, 30 Apr 2002 03:47:23 -0400
Date: Tue, 30 Apr 2002 03:47:19 -0400
From: "Benjamin A. Okopnik"
To: Benjamin Okopnik
Subject: Rewrite test
Message-ID: <20020430074718.GA18398@Baldur>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.3.28i
Status: U
X-UIDL: 27862
WARNING: Deep Magic in progress.
Ben Okopnik
-=-=-=-=-=-
----------------------------------------------------------------------
Reading the routing info from the bottom up, Exim got the message from me,
rewrote the header, and bruins.com got it from Exim, so all of that was
done correctly - meaning that what my MTA says is properly recognized by
others. If the email had disappeared, I would check my
"/var/log/exim/mainlog" to see exactly what had been done to it, and
perhaps my queue to see if it's stuck. However, it looks like all the Deep
Magic is good, and everything is working.
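If you'd rather script the round-trip test than use Mutt or mail(1), the MTA you just set up will accept a message over SMTP on localhost. A sketch with Python's smtplib, using the fake addresses from this article (the MTA, not this script, does the rewriting):

```python
import smtplib
from email.message import EmailMessage

def build_test_message(sender, recipient):
    # The same test message as above, built programmatically.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Rewrite test"
    msg.set_content("WARNING: Deep Magic in progress.\n")
    return msg

def send_test(sender, recipient, host="localhost"):
    # Hand the message to the MTA listening on localhost.
    with smtplib.SMTP(host) as smtp:
        smtp.send_message(build_test_message(sender, recipient))

# send_test("ben@localhost", "happybear@bruins.com")
```

When the copy comes back, check its headers exactly as described above: the "From:" should be the rewritten, Net-valid address.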
WRAP-UP
If you've followed along and made it this far... congratulations. You're
now that much more of a participating Netizen, one of the folks who's
contributed a bit of time and effort to make the Net run a little more
smoothly - and I'm glad to share the IP-space with the likes of you.
Be well, and happy Linuxing!
Ben Okopnik
-=-=-=-=-=-
Ben is a Contributing Editor for Linux Gazette and a member of The Answer
Gang.
Ben was born in Moscow, Russia in 1962. He became interested in
electricity at age six--promptly demonstrating it by sticking a fork into
a socket and starting a fire--and has been falling down technological
mineshafts ever since. He has been working with computers since the Elder
Days, when they had to be built by soldering parts onto printed circuit
boards and programs had to fit into 4k of memory. He would gladly pay good
money to any psychologist who can cure him of the resulting nightmares.
Ben's subsequent experiences include creating software in nearly a dozen
languages, network and database maintenance during the approach of a
hurricane, and writing articles for publications ranging from sailing
magazines to technological journals. Having recently completed a
seven-year Atlantic/Caribbean cruise under sail, he is currently docked in
Baltimore, MD, where he works as a technical instructor for Sun
Microsystems.
Ben has been working with Linux since 1997, and credits it with his
complete loss of interest in waging nuclear warfare on parts of the
Pacific Northwest.
----------------------------------------------------------------------
Copyright (c) 2003, Ben Okopnik. Copying license
http://www.linuxgazette.com/copying.html
Published in Issue 92 of Linux Gazette, July 2003
----------------------------------------------------------------------
+------------------------------------------------------------------------+
| LINUX GAZETTE | Qubism |
| ...making Linux just a little more fun! | By Jon "Sir Flakey" Harsem |
+------------------------------------------------------------------------+
These images are scaled down to minimize horizontal scrolling. To see a
panel in all its clarity, click on it.
[cartoon]
(The tree refers to SCO's logo. The SCO vs Linux lawsuit is covered in
News Bytes.)
All Qubism cartoons are here at the CORE web site.
[BIO] Jon is the creator of the Qubism cartoon strip and current
Editor-in-Chief of the CORE News Site. Somewhere along the early stages of
his life he picked up a pencil and started drawing on the wallpaper. Now
his cartoons appear 5 days a week on-line, go figure. He confesses to
owning a Mac but swears it is for "personal use".
----------------------------------------------------------------------
Copyright (c) 2003, Jon "Sir Flakey" Harsem. Copying license
http://www.linuxgazette.com/copying.html
Published in Issue 92 of Linux Gazette, July 2003
----------------------------------------------------------------------