Add fortune to the last line of your .bashrc if you are using Bash (the default in my Linux distros). If you love cows, you can also install cowsay and pipe the output of fortune to it by adding the appropriate line to the end of your .bashrc.
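A sketch of what that .bashrc addition might look like, assuming fortune and cowsay are installed from your distro's package manager (the guard clauses are my addition, so the snippet stays quiet on machines that don't have them):

```shell
# Print a fortune on every interactive shell startup; pipe it
# through cowsay when available, fall back to plain fortune.
if command -v fortune >/dev/null 2>&1; then
    if command -v cowsay >/dev/null 2>&1; then
        fortune | cowsay
    else
        fortune
    fi
fi
```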
Wget can be used for crawling and other awesome automated stuff.
and voila, the weather for the next 3 days is displayed elegantly in your terminal.
However, Java foils my trick by inspecting /proc/self/cmdline to determine where to load its libraries from, which fails if the binary isn't named 'bin/java'. Java also execs itself during startup, further complicating matters.
The path to the loader is compiled into the binary as you discovered with your hex editor. You actually got lucky that editing the binary directly worked because both /lib/ld-linux.so.2 and /home/chroot/ld.so are the same length. The lengths of those strings are also in the binary and you can cause subtle problems if you modify the strings directly.
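For reference, the loader path lives in the ELF PT_INTERP segment, and you can inspect and rewrite it with standard tools instead of a hex editor. This is a sketch assuming binutils is installed; patchelf is a separate tool you may need to install, and the paths here are illustrative:

```shell
# Show the loader path embedded in a binary (the PT_INTERP segment):
readelf -l /bin/ls | grep interpreter

# Rewrite it safely with patchelf, which handles strings of any
# length (unlike in-place hex editing):
# patchelf --set-interpreter /home/chroot/ld.so ./java
```

Using patchelf avoids the same-length constraint entirely, since it rewrites the segment rather than overwriting bytes in place.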
I was always attracted to the world of kernel hacking and embedded systems.
Has anyone got good tutorials (+easily available hardware) on starting to mess with such stuff?
Something like kits for writing drivers etc, which come with good documentation and are affordable?
If you are completely new to kernel development, I would suggest not starting with hardware and instead writing some "software-only" kernel modules first: a proc file or sysfs interface or, for more complex examples, filesystem or network code. Develop inside a UML/VMware/VirtualBox/... machine so that crashing it won't hurt so much :) For embedded development you could go for a small ARM development kit, a small Via C3/C4 machine, or any old PC which you can burn with your homebrew USB / PCI / whatever device.
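To make the "software-only module" suggestion concrete, here is a sketch of the classic out-of-tree "hello world" module workflow. It assumes the kernel headers for your running kernel are installed and requires root to load the module; the name hello is arbitrary:

```shell
# Build and load a minimal kernel module with kbuild.
mkdir -p ~/hello && cd ~/hello

cat > hello.c <<'EOF'
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

static int __init hello_init(void)
{
        printk(KERN_INFO "hello: module loaded\n");
        return 0;
}

static void __exit hello_exit(void)
{
        printk(KERN_INFO "hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
EOF

cat > Makefile <<'EOF'
obj-m += hello.o
EOF

make -C /lib/modules/$(uname -r)/build M=$PWD modules
sudo insmod hello.ko     # check dmesg for the printk output
sudo rmmod hello
```

Crashing a module like this inside a VM, as suggested above, costs you nothing but a reboot of the guest.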
A good place to start is probably Kernelnewbies.org, which has lots of links and useful information for kernel developers, and also features a list of easy-to-implement tasks for beginners to tackle.
Linux Device Drivers is written more like a tutorial, with a lot of example code, focusing on getting you going and explaining key aspects of the Linux kernel. It introduces the build process and the basics of kernel modules.
As suggested earlier, looking at the Linux code is always a good idea, especially as Linux kernel APIs tend to change quite often. LXR helps a lot with a very nice browsing interface: lxr.linux.no
As for doing embedded work, I would recommend purchasing one of the numerous SBCs (single-board computers) out there. A number of these are based on x86 processors, usually with PC/104 interfaces (electrically, PC/104 is identical to the ISA bus standard, but it uses stackable connectors rather than edge connectors, which makes it very easy to interface custom hardware).
The WRT54G is notable for being the first consumer-level network device that had its firmware source code released to satisfy the obligations of the GNU GPL. This allows programmers to modify the firmware to change or add functionality to the device. Several third-party firmware projects provide the public with enhanced firmware for the WRT54G.
For starters, the best way is to read a lot of code. Since Linux is Open Source, you'll find dozens of drivers. Find one that works in some ways like what you want to write. You'll find some decent and relatively easy-to-understand code (the loopback device, ROM fs, etc.)
There's also an O'Reilly book (Understanding the Linux Kernel; the 3rd edition covers the 2.6 kernels), or if you want something for free, you can use the Advanced Linux Programming book. There is also a lot of specific documentation about file systems, networking, etc.
The Linksys NSLU2 is a low-cost way to get a real embedded system to work with, and has a USB port to add peripherals. Any of a number of wireless access points can also be used, see the OpenWrt compatibility page. Be aware that current models of the Linksys WRT54G you'll find in stores can no longer be used with Linux: they have less RAM and Flash in order to reduce the cost. Cisco/Linksys now uses vxWorks on the WRT54G, with a smaller memory footprint.
If you really want to get into it, evaluation kits for embedded CPUs start at a couple hundred US dollars. I'd recommend not spending money on these unless you need it professionally for a job or consulting contract.
I am a complete beginner in kernel hacking :) I decided to buy two books, "Linux Program Development: a guide with exercises" and "Writing Linux Device Drivers: a guide with exercises". They are very clearly written and provide a good base for further learning.
This article is a compilation of several interesting, unique command-line tricks that should help you squeeze more juice out of your system, improve your situational awareness of what goes on behind the curtains of the desktop, plus some rather unorthodox solutions that will melt the proverbial socks off your kernel.
top is a handy utility for monitoring the utilization of your system. It is invoked from the command line and it works by displaying lots of useful information, including CPU and memory usage, the number of running processes, load, the top resource hitters, and other useful bits. By default, top refreshes its report every 3 seconds.
But what if you wanted to monitor the usage of your system resources unattended? In other words, let some system administration utility run and collect system information and write it to a log file every once in a while. Better yet, what if you wanted to run such a utility only for a given period of time, again without any user interaction?
We have top running in batch mode (-b). It's going to refresh every 10 seconds, as specified by the delay (-d) flag, for a total count of 3 iterations (-n). The output will be sent to a file. A few screenshots:
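The command itself was not preserved alongside the screenshots, but given the flags described above it would look something like this (the log filename is my choice):

```shell
# Batch mode (-b), 10-second delay (-d), 3 iterations (-n),
# with the full report redirected to a log file.
top -b -d 10 -n 3 > top.log
```

Because -b disables the interactive screen handling, the output is plain text suitable for logging or later parsing.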
In general, with static data, this is not a problem. You simply repeat the write operation. With dynamic data, again, this is not that much of a problem. You capture the output into a temporary variable and then write it to a number of files. But there's an easier and faster way of doing it, without redirection and repetitive write operations. The answer: tee.
tee is a very useful utility that duplicates pipe content. Now, what makes tee really useful is that it can append data to existing files, making it ideal for writing periodic log information to multiple files at once.
That's it! We're sending the output of the ps command to three different files! Or as many as we want. As you can see in the screenshots below, all three files were created at the same time and they all contain the same data. This is extremely useful for constantly changing output, which you must preserve in multiple instances without typing the same commands over and over like a keyboard-loving monkey.
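The command behind those screenshots would be along these lines (the three filenames are my choice; -a is the append flag described above):

```shell
# Duplicate the output of ps into three log files at once;
# -a appends rather than truncates, handy for periodic logging.
ps aux | tee -a ps1.log ps2.log ps3.log
```

Afterwards, `cmp ps1.log ps2.log` will confirm that the files are byte-for-byte identical.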
Did you know that you can log the completion of every single process running on your machine? You may even want to do this, for security, statistical purposes, load optimization, or any other administrative reason you may think of. By default, process accounting (pacct) may not be activated on your machine. You might have to start it:
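The activation command was not preserved here; a typical invocation, assuming the GNU acct (or psacct) package is installed, looks like this. Note that the accounting file path varies by distribution, the commands require root, and /var/account/pacct is just one common location:

```shell
# Create the accounting file if needed, then turn accounting on.
sudo touch /var/account/pacct
sudo accton /var/account/pacct
```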
Once this is done, every single process will be logged. You can find the logs under /var/account. The log itself is in binary form, so you will have to use a dumping utility to convert it to human-readable form. To this end, you use the dump-acct utility.
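A sketch of the dump step, assuming the same /var/account/pacct path as above (dump-acct ships with the acct package):

```shell
# Convert the binary accounting log to human-readable columns;
# tail keeps the output to the most recent entries.
dump-acct /var/account/pacct | tail
```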
And there you go: the list of all processes run on our host since the moment we activated the accounting. The output is printed in nice columns and includes the following, from left to right: process name, user time, system time, effective time, UID, GID, memory, and date. Other ways of starting accounting may be in the following forms:
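The alternative forms were not preserved in this copy; typical possibilities, hedged on your distribution's naming (the init script is usually called acct on Debian/Ubuntu and psacct on Red Hat-style systems), are:

```shell
# Via the init script (preferred; service name varies by distro):
sudo /etc/init.d/acct start

# Or directly with accton; running it with no file turns accounting off:
sudo accton /var/account/pacct   # on
sudo accton                      # off
```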
In fact, starting accounting using the init script is the preferred way of doing things. However, you should note that accounting is not a service in the typical form. The init script does not look for a running process; it merely checks for the lock file under /var. Therefore, if you turn the accounting on/off using the accton command, the init scripts won't be aware of this and may report false results.
When no file is specified, the accounting is turned off. When the command is run against a file, as we've demonstrated earlier, the accounting process is started. You should be careful when activating/deactivating the accounting and stick to one method of management, either via the accton command or using the init scripts.
Like pacct, you can also dump the contents of the utmp and wtmp files. Both of these files provide login records for the host. This information may be critical, especially if applications rely on the proper output of these files to function.
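For the dump itself, the acct package includes a dump-utmp utility analogous to dump-acct; the standard who and last commands read the same files and are usually available everywhere. The file paths below are the conventional locations:

```shell
# Print utmp/wtmp records as text (dump-utmp is part of the acct package):
dump-utmp /var/run/utmp
dump-utmp /var/log/wtmp | tail

# The stock tools read the same records:
who    # current logins, from utmp
last   # login history, from wtmp
```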
Being able to analyze the records gives you the power to examine your systems in and out. Furthermore, it may help you diagnose problems with logins, for example, via VNC or ssh, non-console and console login attempts, and more.
Would you like to know how your hard disks behave? Or how well your CPU churns? iostat is a utility that reports statistics for the CPU and I/O devices on your system. It can help you identify bottlenecks and mis-tuned kernel parameters, allowing you to boost the performance of your machine.
On some systems, the utility will be installed by default. Ubuntu 9.04, for example, requires that you install the sysstat package, which, by the way, contains several more goodies that we will soon review:
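On Ubuntu/Debian systems the install is a one-liner (package names may differ on other distributions):

```shell
# iostat ships in the sysstat package on Debian-family distros.
sudo apt-get install sysstat
```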
Then, we can start monitoring the performance. I will not go into detail about what each little bit of displayed information means, but I will focus on one item: the first output reported by the utility shows the average statistics since the last reboot.
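A sample invocation illustrating that point (the interval and count are my choice):

```shell
# Report device statistics every 2 seconds, 3 times. The first
# report shows averages since boot; the following two cover only
# their own 2-second interval.
iostat -d 2 3
```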