Monday, 30 July 2012

Installation steps: OpenShift client tools

From the command line

If you'd like to use our powerful command line client, the steps below will get you installed and running in minutes.
The OpenShift client tools are packaged as a Ruby gem. To run OpenShift on your platform, you'll need Ruby 1.8.7 or newer, the ability to install a gem (on Linux this will require root access), and the Git version control tool. More details about installing the client tools are available in the Developer Center.

1. Install the client tools

Fedora and Red Hat Enterprise Linux

  1. Install the OpenShift prerequisites via YUM
    $ sudo yum install rubygems git

    (RHEL 6.2 only) If you are using RHN Classic, you may need to add the RHEL 6.2 Optional Channel in order to install the 'rubygems' package.
    $ sudo rhn-channel --add --channel=rhel-x86_64-server-optional-6
  2. Install the gem:
    $ sudo gem install rhc

Other Linux distributions

  1. Install the required packages: Ruby 1.8.7 or newer, rubygems, and git
  2. Install the gem:
    $ sudo gem install rhc


Windows

  1. From the Windows Command Prompt install the gem:
    $ gem install rhc


Mac OS X

  1. Install the gem:
    $ sudo gem install rhc

To update to the latest version of the client tools, use the gem update command:
$ sudo gem update rhc

2. Set up your environment

Using your OpenShift login and password, run rhc setup to connect to OpenShift and create a unique namespace for your applications.
$ rhc setup

Starting Interactive Setup for OpenShift's command line interface

We'll help get you setup with just a couple of questions.

To connect, enter your OpenShift login (email or Red Hat login id):
The wizard will help you upload your SSH keys so you can communicate with Git, check to see if you are missing any required configuration, and then help you create a domain name. On OpenShift, domain names make up part of your app's URL. They are also unique across all OpenShift users, so choose wisely and be creative!

3. Create your first application

Now you can create an application.
$ rhc app create -a myapp -t php-5.3
Password: (type... type... type...)
This will create a remote git repository for your application, and clone it locally in your current directory.

4. Make a change, publish

Getting an application running is only the first step. Now you are on the road to making it your own. Here's an example for the PHP application created above.
$ cd myapp
$ vim php/index.php
(Make a change...  :wq)
$ git commit -a -m "My first change"
$ git push
Use whichever IDE or editor works best for you. Chances are, it'll have git support. Even if it doesn't, you're just two simple commands away from glory!
Now, check your URL - your change will be live.

5. Next steps

While this has gotten you started, there is a lot more information out there to really get you going. Check out the following pages for videos, blogs, and tutorials:
Red Hat OpenShift Origin logical view (figure)

    Friday, 27 July 2012

    Getting Deeper Into Perl


    $ perldoc perlop   (perlop - Perl operators and precedence)

    Variable Scoping in Perl:

    my provides lexical scoping; a variable declared with my is visible only within the block in which it is declared.

    Blocks of code are hunks within curly braces {}; files are blocks.

    Use use vars qw([list of var names]) or our ([var_names]) to create package globals.

    local saves away the value of a package global and substitutes a new value for all code within and called from the block in which the local declaration is made.

    Use the package operator to set the current package.

    Implicitly, there's a package main; at the top of your scripts unless you explicitly declare a different package. Variables that live in a package are reasonably called "package globals", because they are accessible by default to every operator and subroutine that lives in the same package.

    Using packages makes accessing Perl variables sort of like travelling in different circles.

    To satisfy strict 'vars' (the part of strict that enforces variable declaration), you have two options; they produce different results, and one is only available in perl 5.6.0 and later:
    1. our ($foo, $bar) (in perl 5.6.0 and above) declares $foo and $bar to be variables in the current package.
    2. use vars qw($foo $bar) (previous versions, but still works in 5.6) tells 'strict vars' that these variables are OK to use without qualification in the current package.
    One difference between our and the 'older' use vars is that our provides lexical scoping (more on which in the section on my below).

    Another difference is that with use vars, you are expected to give an array of variable names, not the variables themselves (as with our). Both mechanisms allow you to use globals while still maintaining one of the chief benefits of strict 'vars'.

    my (and a little more on our): lexical scoping

    1. Variables declared with my are not globals. A main use of my is to operate on a variable that's only of use within a loop or subroutine.

    • A my variable has a block of code as its scope (i.e. the places in which it is accessible).
    • A block is often declared with braces {}, but as far as Perl is concerned, a file is a block.
    • A variable declared with my does not belong to any package; it 'belongs' only to its block.
    • Although you can name blocks (e.g. BEGIN, with which you may already be familiar), you can't fully qualify the name of the block to get to the my variable.
    • File-level my variables are those which are declared in a file outside of any block within that file.
    • You can't access a file-level my variable from outside of the file in which it is declared (unless you explicitly return it from a subroutine, for example).

    • local (dynamic scoping): as described above, local temporarily replaces the value of a package global for all code within, and called from, the enclosing block.

    Wednesday, 25 July 2012

    Cisco Router Basics


    Cisco is well known for its routers and switches. I must admit they are very good quality products and once they are up and running, you can pretty much forget about them because they rarely fail. 

    We are going to focus on routers here since that's the reason you clicked on this page!

    Cisco has a number of different routers, amongst them are the popular 880 series, 2900 series and 3900 series. 

    Below are pictures of a few of the routers mentioned (880 & 2900 series):
    All the above equipment runs special software called the Cisco Internetwork Operating System or IOS. This is the kernel of Cisco routers and most switches. Cisco has created what they call Cisco Fusion, which is supposed to make all Cisco devices run the same operating system.
    We are going to begin with the basic components which make up a Cisco router (and switches) and I will be explaining what they are used for, so grab that tea or coffee and let's get going!
    The basic components of any Cisco router are :
    1) Interfaces
    2) The Processor (CPU)
    3) Internetwork Operating System (IOS)
    4) RXBoot Image
    5) RAM
    6) NVRAM
    7) ROM
    8) Flash memory
    9) Configuration Register

    Now I just hope you haven't looked at the list and thought "Stuff this, it looks hard and complicated" because I assure you, it's less painful than you might think! In fact, once you read it a couple of times, you will find all of it easy to remember and understand.
    Interfaces
    These allow us to use the router! The interfaces are the various serial ports or ethernet ports which we use to connect the router to our LAN. There are a number of different interfaces but we are going to hit the basic stuff only.
    Here are some of the names Cisco has given some of the interfaces: E0 (first Ethernet interface), E1 (second Ethernet interface), S0 (first Serial interface), S1 (second Serial interface), BRI 0 (first B channel for Basic ISDN) and BRI 1 (second B channel for Basic ISDN).
    In the picture below you can see the back view of a Cisco router, where you can clearly see the various interfaces it has (we are only looking at ISDN routers):
    You can see that it even has phone sockets! Yes, that's normal since you have to connect a digital phone to an ISDN line and since this is an ISDN router, it has this option with the router. I should, however, explain that you don't normally get routers with ISDN S/T and ISDN U interfaces together. Any ISDN line requires a Network Terminator (NT) installed at the customer's premises and you connect your equipment after this terminator. An ISDN S/T interface doesn't have the NT device built in, so you need an NT device in order to use the router. On the other hand, an ISDN U interface has the NT device built in to the router.
    Check the picture below to see how to connect the router using the different ISDN interfaces:

    Apart from the ISDN interfaces, we also have an Ethernet interface that connects to a device in your LAN, usually a hub or a computer. If connecting to a Hub uplink port, then you set the small switch to "Hub", but if connecting to a PC, you need to set it to "Node". This switch will simply convert the cable from a straight through (hub) to a x-over (Node):
    The Config or Console port is a Female DB9 connector which you connect, using a special cable, to your computer's serial port, and it allows you to directly configure the router.
    The Processor (CPU)
    All Cisco routers have a main processor that takes care of the main functions of the router. The CPU generates interrupts (IRQs) in order to communicate with the other electronic components in the router. Cisco routers utilise Motorola RISC processors. Usually the CPU utilisation on a normal router wouldn't exceed 20%.
    The IOS
    The IOS is the main operating system on which the router runs. The IOS is loaded upon the router's bootup. It usually is around 2 to 5MB in size, but can be a lot larger depending on the router series. The IOS is currently on version 12, and Cisco periodically releases minor versions every couple of months, e.g. 12.1, 12.3 etc., to fix small bugs and also add extra functionality.
    The IOS gives the router its various capabilities and can also be updated or downloaded from the router for backup purposes. On the 1600 series and above, you get the IOS on a PCMCIA Flash card. This Flash card then plugs into a slot located at the back of the router and the router loads the IOS "image" (as they call it). Usually this image of the operating system is compressed so the router must decompress the image in its memory in order to use it.
    The IOS is one of the most critical parts of the router, without it the router is pretty much useless. Just keep in mind that it is not necessary to have a flash card (as described above with the 1600 series router) in order to load the IOS. You can actually configure most Cisco routers to load the image off a network tftp server or from another router which might hold multiple IOS images for different routers, in which case it will have a large capacity Flash card to store these images.
    The RXBoot Image
    The RXBoot image (also known as Bootloader) is nothing more than a "cut-down" version of the IOS located in the router's ROM (Read Only Memory). If you had no Flash card to load the IOS from, you can configure the router to load the RXBoot image, which would give you the ability to perform minor maintenance operations and bring various interfaces up or down.
    The RAM
    The RAM, or Random Access Memory, is where the router loads the IOS and the configuration file. It works exactly the same way as your computer's memory, where the operating system loads along with all the various programs. The amount of RAM your router needs is subject to the size of the IOS image and configuration file you have. To give you an indication of the amounts of RAM we are talking about, in most cases, smaller routers (up to the 1600 series) are happy with 12 to 16 MB while the bigger routers with larger IOS images would need around 32 to 64 MB of memory. Routing tables are also stored in the system's RAM, so if you have large and complex routing tables, you will obviously need more RAM!
    When I tried to upgrade the RAM on a Cisco 1600 router, I unscrewed the case and opened it and was amazed to find a 72 pin SIMM slot where you needed to attach the extra RAM. For those who don't know what a 72 pin SIMM is, it's basically the type of RAM the older Pentium socket 7 CPUs took, back in '95. This type of memory was replaced by today's standard 168 pin DIMMs or SDRAM.
    The NVRAM (Non-Volatile RAM)
    The NVRAM is a special memory place where the router holds its configuration. When you configure a router and then save the configuration, it is stored in the NVRAM. This memory is not big at all when compared with the system's RAM. On a Cisco 1600 series, it is only 8 KB while on bigger routers, like the 2600 series, it is 32 KB. Normally, when a router starts up, after it loads the IOS image it will look into the NVRAM and load the configuration file in order to configure the router. The NVRAM is not erased when the router is reloaded or even switched off.
    ROM (Read Only Memory)
    The ROM is used to start and maintain the router. It contains some code, like the Bootstrap and POST, which helps the router do some basic tests and bootup when it's powered on or reloaded. You cannot alter any of the code in this memory as it has been set from the factory and is Read Only.
    Flash Memory
    The Flash memory is that card I spoke about in the IOS section. All it is, is an EEPROM (Electrically Erasable Programmable Read Only Memory) card. It fits into a special slot normally located at the back of the router and contains nothing more than the IOS image(s). You can write to it or delete its contents from the router's console. Usually it comes in sizes of 4MB for the smaller routers (1600 series) and goes up from there depending on the router model.
    Configuration Register
    Keeping things simple, the Configuration Register determines if the router is going to boot the IOS image from its Flash, from a tftp server, or just load the RXBoot image. This is a 16-bit register; in other words, it holds 16 zeros or ones. A sample of it in hex would be 0x2102, which in binary is 0010 0001 0000 0010.

    How to Update Linux Workstations and Operating Systems


    RPM - RedHat Package Manager

    Although the RPM package format was originally created by Red Hat, package management is handled by different tools specific to each Linux distribution. OpenSUSE uses the "zypp" package management utility, Red Hat Enterprise Linux (RHEL), Fedora and CentOS use "yum", and Mandriva and Mageia use "urpmi".
    Therefore, if you are an OpenSUSE user, you will use the following commands:
    For updating your package list: zypper refresh
    For upgrading your system: zypper update
    For installing new software pkg: zypper install pkg (from package repository)
    For installing new software pkg: zypper install pkg (from package file)
    For updating existing software pkg: zypper update -t package pkg
    For removing unwanted software pkg: zypper remove pkg
    For listing installed packages: zypper search -ls
    For searching by file name: zypper wp file
    For searching by pattern: zypper search -t pattern pattern
    For searching by package name pkg: zypper search pkg
    For listing repositories: zypper repos
    For adding a repository: zypper addrepo pathname
    For removing a repository: zypper removerepo name

    If you are a Fedora or CentOS user, you will be using the following commands:
    For updating your package list: yum check-update
    For upgrading your system: yum update
    For installing new software pkg: yum install pkg (from package repository)
    For installing new software pkg: yum localinstall pkg (from package file)
    For updating existing software pkg: yum update pkg
    For removing unwanted software pkg: yum erase pkg
    For listing installed packages: rpm -qa
    For searching by file name: yum provides file
    For searching by pattern: yum search pattern
    For searching by package name pkg: yum list pkg
    For listing repositories: yum repolist
    For adding a repository: (add repo to /etc/yum.repos.d/)
    For removing a repository: (remove repo from /etc/yum.repos.d/)

    DEB - Debian Package Manager

    Debian Package Manager was introduced by Debian and later adopted by all derivatives of Debian - Ubuntu, Mint, Knoppix, etc. 
    This is a relatively simple and standardized set of tools, working across all the Debian derivatives. Therefore, if you use any of the distributions managed by the DEB package manager, you will be using the following commands:
    For updating your package list: apt-get update
    For upgrading your system: apt-get upgrade
    For installing new software pkg: apt-get install pkg (from package repository)
    For installing new software pkg: dpkg -i pkg (from package file)
    For updating existing software pkg: apt-get install pkg
    For removing unwanted software pkg: apt-get remove pkg
    For listing installed packages: dpkg -l
    For searching by file name: apt-file search path
    For searching by pattern: apt-cache search pattern
    For searching by package name pkg: apt-cache search pkg
    For listing repositories: cat /etc/apt/sources.list
    For adding a repository: (edit /etc/apt/sources.list)
    For removing a repository: (edit /etc/apt/sources.list)

    Linux System Resource & Performance Monitoring


    Monitoring the Hard Disk Space

    Use a simple command like:
    df -h
    This results in the output:
    Filesystem                Size          Used         Avail     Use%       Mounted on
    /dev/sda1                 22G          5.0G          16G      24%         /
    /dev/sda2                 34G           23G          9.1G     72%         /home

    This shows there are two partitions (1 & 2) of the hard disk sda, which are currently at 24% and 72% utilization. The total size is shown in gigabytes (G), along with how much is used and how much remains available. However, checking each hard disk to see the percentage used can be a big drag. It is better that the system checks the disks and informs you by email if there is a potential danger. Bash scripts may be written for this and run at specific times as a cron job.
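    Such a check can be sketched in a few lines of portable shell; the 90% threshold is an arbitrary choice, and the cron/mail wiring below is a placeholder to adapt for your site:

```shell
#!/bin/sh
# Print a warning line for every filesystem that is more than
# THRESHOLD percent full; silence means everything is fine.
THRESHOLD=90
df -P | awk -v limit="$THRESHOLD" '
  NR > 1 {
    sub(/%/, "", $5)          # strip the % sign from the Use% column
    if ($5 + 0 > limit)
      printf "Warning: %s (%s) is %s%% full\n", $6, $1, $5
  }'
```

    Saved as, say, /usr/local/bin/diskcheck (a placeholder path), a crontab entry like `0 * * * * /usr/local/bin/diskcheck | mail -s "disk usage" admin@example.com` would run it hourly and mail you whenever a disk crosses the threshold.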
    For the GUI, there is a graphical tool called ‘Baobab’ for checking the disk usage. It shows how a disk is being used and displays the information in the form of either multicolored concentric rings or boxes.


    Monitoring Memory Usage

    RAM or memory is used to run the current application. Under Linux, there are a number of ways you can check the used memory space -- both in static and dynamic conditions.
    For a static snapshot of the memory, use 'free -m', which results in the output:
    $ free -m
                 total       used       free     shared    buffers     cached
    Mem:          1998       1896        101          0         59        605
    -/+ buffers/cache:       1231        766
    Swap:          290         77        213

    Here, the total amount of RAM is depicted in megabytes (MB), along with cache and swap. A somewhat more detailed output can be obtained by the command ‘vmstat’:
    root@gateway [~]# vmstat
    procs   -----------memory-------------       ---swap--   -----io----    --system--  -----cpu------
     r    b   swpd     free        buff  cache       si       so       bi    bo      in     cs    us  sy  id  wa  st
     1   0      0       767932        0        0        0        0       10     3       0     1      2   0   97   0   0
    root@gateway [~]#

    However, to examine what is happening to the memory dynamically, you have to use 'top' or 'htop'. Both will give you a picture of which process is using what amount of memory, updated periodically. Both 'top' and 'htop' will also show the CPU utilization, tasks running and their PIDs. Whereas 'top' has a purely numerical display, 'htop' is somewhat more colorful and has a semi-graphic look, with a list of command menus at the bottom for setup and specific operations.
    root@gateway [~]# top

    top - 01:04:18 up 81 days, 11:05,  1 user,  load average: 0.08, 0.28, 0.33
    Tasks:  47 total,   1 running,  45 sleeping,   0 stopped,   1 zombie
    Cpu(s):  2.4%us,  0.4%sy,  0.0%ni, 96.7%id,  0.5%wa,  0.0%hi,  0.0%si,  0.0%st
    Mem:   1048576k total,   261740k used,   786836k free,        0k buffers
    Swap:            0k total,            0k used,            0k free,        0k cached

      PID    USER       PR  NI  VIRT  RES  SHR S  %CPU   %MEM    TIME+  COMMAND                                
          1   root         15   0  10372  736  624 S   0.0       0.1        1:41.86     init                                   
     5407   root         18   0  12424  756  544 S   0.0       0.1        0:13.71    dovecot                                
     5408   root         15   0  19068 1144  892 S  0.0       0.1        0:12.09    dovecot-auth                           
     5416   dovecot   15   0  38480 2868 2008 S  0.0       0.3        0:10.80    pop3-login                             
     5417   dovecot   15   0  38468 2880 2008 S  0.0       0.3        0:49.31    pop3-login                             
     5418   dovecot   16   0  38336 2700 2020 S  0.0       0.3        0:01.15    imap-login                             
     5419   dovecot   15   0  38484 2856 2020 S  0.0       0.3        0:04.69    imap-login                             
     9745   root        18   0  71548  22m 1400 S  0.0       2.2        0:01.39    lfd                                    
    11501  root        15   0  160m  67m 2824 S   0.0       6.6        1:32.51   spamd                                  
    23935  firewall   18   0  15276 1180  980 S   0.0        0.1        0:00.00   imap                                   
    23948  mailnull  15   0  64292 3300 2620 S   0.0       0.3        0:05.62   exim                                   
    23993  root       15   0  141m  49m 2760 S   0.0       4.8         1:00.87   spamd                                  
    24477  root       18   0  37480 6464 1372 S   0.0       0.6        0:04.17   queueprocd                             
    24494  root       18   0  44524 8028 2200 S  0.0        0.8        1:20.86   tailwatchd                             
    24526  root       19   0  92984  14m 1820 S  0.0       1.4         0:00.00   cpdavd                                 
    24536  root       33  18 23892 2556  680 S   0.0       0.2         0:02.09   cpanellogd                             
    24543  root       18   0  87692  11m 1400 S  0.0       1.1         0:33.87   cpsrvd-ssl                             
    25952  named    22  0 349m 8052 2076 S    0.0       0.8        20:17.42   named                                  
    26374  root       15  -4 12788  752  440 S    0.0       0.1         0:00.00   udevd                                  
    28031  root       17   0 48696 8232 2380 S   0.0       0.8         0:00.07   leechprotect                           
    28038  root       18   0 71992 2172  132 S   0.0       0.2         0:00.00   httpd                                  
    28524  root       18   0 90944 3304 2584 S  0.0       0.3         0:00.01   sshd

    For a graphical display of how the memory is being utilized, the Gnome System Monitor gives a detailed picture. There are other system monitors available under various window managers in Linux.


    What is Your CPU Doing?

    You may have a single, a dual core, or a quad core CPU in your system. To see what each CPU is doing or how two CPUs are sharing the load, you have to use ‘top’ or ‘htop’. These command line applications show the percentage of each CPU being utilized. You can also see process statistics, memory utilization, uptime, load average, CPU status, process counts, and memory and swap space utilization statistics.
    Similar output statistics may be seen by using command line tools such as 'mpstat', which is part of a package called 'sysstat'. You may have to install 'sysstat' on your system, since it may not be installed by default. Once installed, you can monitor a variety of parameters, for example comparing the CPU utilization of the individual processors in an SMP (multi-processor) system.
    Finding out if any specific process is hogging the CPU needs a little more command line instruction such as:
    ps -eo pcpu,pid,user,args | sort -r -k1 | less
    ps -eo pcpu,pid,user,args | sort -k 1 -r | head -10
    Similar output can be obtained by using the command ‘iostat’ as root:
    root@gateway [~]# iostat -xtc 5 3
    Linux 2.6.18-028stab094.3 (         01/11/2012

    Time: 01:13:15 AM
    avg-cpu:  %user   %nice   %system  %iowait  %steal   %idle
                      2.38    0.01     0.43          0.46      0.00      96.72

    Time: 01:13:20 AM
    avg-cpu:  %user   %nice   %system  %iowait  %steal   %idle
                      3.89    0.00     0.26          0.09      0.00      95.77

    Time: 01:13:25 AM
    avg-cpu:  %user   %nice   %system  %iowait  %steal   %idle
                      0.31    0.00    0.15           1.07     0.00       98.47
    This shows three reports at five-second intervals; note that the first report gives averages since the last reboot, while the following ones cover the interval since the previous report.
    CPU usage under GUI is very well depicted by the Gnome System Monitor and other system monitoring applications. These are also useful for monitoring remote servers. Detailed memory maps can be accessed, signals can be sent and processes controlled remotely.



    What’s Cooking?

    How do you know what processes are currently running in your Linux system? There are innumerable ways of getting to see this information. The handiest applications are the old faithfuls - ‘top’ and ‘htop’. They will give a real-time image of what is going on under the hood. However, if you prefer a more static view, use ‘ps’. To see all processes try ‘ps -A’ or ‘ps -e’:
    root@gateway [~]# ps -e
    PID TTY          TIME CMD
        1 ?          00:01:41 init
     3201 ?        00:00:00 leechprotect
     3208 ?        00:00:00 httpd
     3360 ?        00:00:00 httpd
     3490 ?        00:00:00 httpd
     3530 ?        00:00:00 httpd
     3532 ?        00:00:00 httpd
     3533 ?        00:00:00 httpd
     3535 ?        00:00:00 httpd
     3575 ?        00:00:00 httpd
     3576 ?        00:00:00 httpd
     3631 ?        00:00:00 imap
     3694 ?        00:00:00 httpd
     3705 ?        00:00:00 httpd
     3770 ?        00:00:00 imap
     3774 pts/0    00:00:00 ps
     5407 ?        00:00:13 dovecot
     5408 ?        00:00:12 dovecot-auth
     5416 ?        00:00:10 pop3-login
     5417 ?        00:00:49 pop3-login
     5418 ?        00:00:01 imap-login
     5419 ?        00:00:04 imap-login
     9745 ?        00:00:01 lfd
    11501 ?        00:01:35 spamd
    23948 ?        00:00:05 exim
    23993 ?        00:01:00 spamd
    24477 ?        00:00:04 queueprocd
    24494 ?        00:01:20 tailwatchd
    24526 ?        00:00:00 cpdavd
    24536 ?        00:00:02 cpanellogd
    24543 ?        00:00:33 cpsrvd-ssl
    25952 ?        00:20:17 named
    26374 ?        00:00:00 udevd
    28524 ?        00:00:00 sshd
    28531 pts/0    00:00:00 bash
    29834 ?        00:00:00 sshd
    30426 ?        00:11:27 syslogd
    30429 ?        00:00:00 klogd
    30473 ?        00:00:00 xinetd
    30485 ?        00:00:00 mysqld_safe
    30549 ?        1-15:07:28 mysqld
    32158 ?        00:06:29 httpd
    32166 ?        00:12:39 pure-ftpd
    32168 ?        00:07:12 pure-authd
    32181 ?        00:01:06 crond
    32368 ?        00:00:00 saslauthd
    32373 ?        00:00:00 saslauthd

    ps is an extremely powerful and versatile command, and you can learn more via 'ps --help':
    root@gateway [~]# ps --help
    ********* simple selection *********  ********* selection by list *********
    -A all processes                                   -C by command name
    -N negate selection                              -G by real group ID (supports names)
    -a all w/ tty except session leaders        -U by real user ID (supports names)
    -d all except session leaders                  -g by session OR by effective group name
    -e all processes                                    -p by process ID
    T  all processes on this terminal             -s processes in the sessions given
    a  all w/ tty, including other users           -t by tty
    g  OBSOLETE -- DO NOT USE                -u by effective user ID (supports names)
    r  only running processes                      U  processes for specified users
    x  processes w/o controlling ttys            t  by tty
    *********** output format **********  *********** long options ***********
    -o,o user-defined   -f full                        --Group --User --pid --cols --ppid
    -j,j job control       s  signal                    --group --user --sid --rows --info
    -O,O preloaded    -o  v  virtual memory  --cumulative --format --deselect
    -l,l long                u  user-oriented         --sort --tty --forest --version
    -F   extra full        X  registers                --heading --no-heading --context
                        ********* misc options *********
    -V,V  show version        L  list format codes        f  ASCII art forest
    -m,m,-L,-T,H  threads   S  children in sum         -y change -l format
    -M,Z  security data       c  true command name  -c scheduling class
    -w,w  wide output         n  numeric WCHAN,UID  -H process hierarchy