Thursday, December 16, 2010

Crontab


Scheduling explained

As you can see there are 5 stars. The stars represent different date parts in the following order:

  1. minute (from 0 to 59)
  2. hour (from 0 to 23)
  3. day of month (from 1 to 31)
  4. month (from 1 to 12)
  5. day of week (from 0 to 6) (0=Sunday)

Execute every minute

If you leave a field as a star, or asterisk, it means "every". Maybe that's a bit unclear. Let's use the previous example again:

* * * * * /bin/execute/this/script.sh

They are all still asterisks! So this means execute /bin/execute/this/script.sh:

  1. every minute
  2. of every hour
  3. of every day of the month
  4. of every month
  5. and every day in the week.

In short: This script is being executed every minute. Without exception.

Execute every Friday 1AM

So if we want to schedule the script to run at 1AM every Friday, we would need the following cronjob:

0 1 * * 5 /bin/execute/this/script.sh

Get it? The script is now being executed when the system clock hits:

  1. minute: 0
  2. of hour: 1
  3. of day of month: * (every day of month)
  4. of month: * (every month)
  5. and weekday: 5 (=Friday)

Execute on workdays 1AM

So if we want to schedule the script to run Monday till Friday at 1 AM, we would need the following cronjob:

0 1 * * 1-5 /bin/execute/this/script.sh

Get it? The script is now being executed when the system clock hits:

  1. minute: 0
  2. of hour: 1
  3. of day of month: * (every day of month)
  4. of month: * (every month)
  5. and weekday: 1-5 (=Monday till Friday)

Execute 10 minutes past every hour on the 1st of every month

Here's another one, just for practice:

10 * 1 * * /bin/execute/this/script.sh

Fair enough, it takes some getting used to, but it offers great flexibility.

Neat scheduling tricks

What if you'd want to run something every 10 minutes? Well you could do this:

0,10,20,30,40,50 * * * * /bin/execute/this/script.sh

But crontab allows you to do this as well:

*/10 * * * * /bin/execute/this/script.sh

Which will do exactly the same. Can you do the math? ;)
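Steps can also be combined with ranges. For example (a made-up schedule, just to illustrate the syntax), this would run the script every two hours between 9AM and 5PM on workdays only:

0 9-17/2 * * 1-5 /bin/execute/this/script.sh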

Special words

Instead of the first five fields, you can also put in one of these keywords:

@reboot     Run once, at startup
@yearly     Run once a year      "0 0 1 1 *"
@annually   (same as @yearly)
@monthly    Run once a month     "0 0 1 * *"
@weekly     Run once a week      "0 0 * * 0"
@daily      Run once a day       "0 0 * * *"
@midnight   (same as @daily)
@hourly     Run once an hour     "0 * * * *"

The keyword replaces all of the date fields, so this would be valid:

@daily /bin/execute/this/script.sh
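By the way: to actually install any of these lines, edit your personal crontab with crontab -e and paste the line in; crontab -l shows what is currently scheduled:

crontab -e
crontab -l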

Storing the crontab output

By default cron saves the output of /bin/execute/this/script.sh in the user's mailbox (root in this case). But it's prettier if the output is saved in a separate logfile. Here's how:

*/10 * * * * /bin/execute/this/script.sh >> /var/log/script_output.log 2>&1

Explained

Linux can report on different levels. There's standard output (STDOUT) and standard error (STDERR). STDOUT is marked 1, STDERR is marked 2. First we pour STDOUT into a file. Where > would overwrite the file, >> appends to it. In this case we'd like to append:

>> /var/log/script_output.log

Then the following statement tells Linux to send STDERR to the same place STDOUT is going, creating one datastream for messages & errors:

2>&1

Note that the order matters: 2>&1 has to come after the file redirection, otherwise the errors would still end up in the mailbox instead of the logfile.
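If you don't care about the output at all, you can throw it away completely by pointing both streams at /dev/null instead of a logfile:

*/10 * * * * /bin/execute/this/script.sh > /dev/null 2>&1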

Thursday, July 15, 2010

Server High Load Troubleshooting

The first command I run when I log in to the system is uptime:

$ uptime
18:30:35 up 365 days, 5:29, 2 users, load average: 1.37, 10.15, 8.10

Here my load average is 1.37, 10.15, 8.10. These numbers represent my average system load over the last 1, 5 and 15 minutes, respectively. Technically speaking, the load average represents the average number of processes that have to wait for CPU time during the last 1, 5 or 15 minutes.
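Keep in mind that a load average only means something relative to the number of CPU cores: a load of 4 is heavy on a single-core box but harmless on an 8-core one. A quick way to check the core count (nproc is part of coreutils on most modern systems; the /proc trick works almost everywhere):

nproc
grep -c ^processor /proc/cpuinfo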

$ top

If the first tool I use when I log in to a sluggish system is uptime, the second tool I use is top. The great thing about top is that it's available for all major Linux systems, and it provides a lot of useful information in a single screen. top is quite a complex tool with many options that could warrant its own article. Here, I stick to how to interpret its output to diagnose high load.
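If you just want a one-off snapshot of top's output (to paste into a ticket, for example), batch mode works well; -b and -n are standard top options, though the exact layout differs slightly between versions:

top -b -n 1 | head -n 20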

CPU-Bound Load

CPU-bound load is load caused when you have too many CPU-intensive processes running at once. Because each process needs CPU resources, they all must wait their turn. To check whether load is CPU-bound, check the CPU line in the top output:

Cpu(s): 11.4%us, 29.6%sy, 0.0%ni, 58.3%id, 0.7%wa, 0.0%hi, 0.0%si, 0.0%st


Each of these values is the percentage of CPU time tied up doing a particular task. Again, you could spend an entire article on all of the output from top, so here are a few of these values and how to read them.

us: user CPU time. More often than not, when you have CPU-bound load, it’s due to a process
run by a user on the system, such as Apache, MySQL or maybe a shell script. If this percentage
is high, a user process such as those is a likely cause of the load.

sy: system CPU time. The system CPU time is the percentage of the CPU tied up by the kernel and
other system processes. CPU-bound load should manifest as a high percentage of either user or
system CPU time.

id: CPU idle time. This is the percentage of the time that the CPU spends idle. The higher the
number here the better! In fact, if you see really high CPU idle time, it’s a good indication that
any high load is not CPU-bound.

wa: I/O wait. The I/O wait value tells the percentage of time the CPU is spending waiting on I/O (typically disk I/O). If you have high load and this value is high, it’s likely the load is not CPU-bound but is due to either RAM issues or high disk I/O.
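If wa is high, the next step is usually to find out which disk is busy. Assuming the sysstat package is installed, iostat gives a per-device breakdown; vmstat is usually available by default and shows the same wa column over time:

iostat -x 1 5
vmstat 1 5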


Tuesday, April 27, 2010

Step by Step: Delete/Remove/Clear Squid Cache on a Linux Fedora System

1. Check squid cache location

Check the squid cache location on your Linux system. The default location is usually the /var/spool/squid directory, but you can use the command shown in the example below to check the squid cache directory on your system.


cat /etc/squid/squid.conf | grep ^cache_dir
Output
cache_dir ufs /var/spool/squid 4000 16 256

2. Stop the squid cache proxy server
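The exact command depends on your init setup; on a Fedora system of this era it is typically one of:

service squid stop
/etc/init.d/squid stop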

3. Create a directory and move the squid cache files

Create a new directory to temporarily store the old squid cache files. Why should we do this? To minimize squid proxy downtime. First we create the directory, then move the old squid cache files into it:


mkdir /var/spool/squid/squid_cache_old
mv /var/spool/squid/?? /var/spool/squid/squid_cache_old/
mv /var/spool/squid/swap* /var/spool/squid/squid_cache_old/

4. Initialize the new squid cache directories

Running squid with -z creates fresh cache (swap) directories as defined by the cache_dir setting:

squid -z

5. Start the squid service
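Again, depending on your init setup, typically:

service squid start

Once squid is up and serving again, the old cache files can be removed in the background at your leisure (that was the whole point of moving them aside instead of deleting them right away):

rm -rf /var/spool/squid/squid_cache_old &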