Check if a PHP method is called statically or as an instance method

PHP allows you to call any method of a class both statically and as an instance method. Recent PHP versions are stricter about this (PHP 5 raises an E_STRICT notice when a non-static method is called statically), but the call still goes through. However, calling a non-static method that uses $this statically will cause a fatal error. You’d want to catch this yourself and throw an exception with a meaningful message.

If you can determine whether the method is called statically or as an instance method, you can handle this situation gracefully.

See this example class.

class C {
    private $value = "value";
    public function method() {
        echo __METHOD__ . "\n";
        echo "Value: " . $this->value . "\n";
    }
}

This method will raise a fatal error if you call it as C::method().
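
For example (PHP 5 output shown; newer versions word it differently):

C::method();
// Strict standards: Non-static method C::method() should not be called statically
// Fatal error: Using $this when not in object context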

There are two techniques to determine whether a method was called statically or as an instance method.

  1. Using debug_backtrace
    1. class A {
           public function m() {
               $bt = debug_backtrace();
               // frame 0 describes this call; 'type' is '::' for a static
               // call and '->' for an instance call
               if ($bt[0]['type'] == '::')
                   throw new Exception(__METHOD__ . " is called statically");
           }
       }
      
      
  2. Using isset on $this.
    1. class A {
           public function m() {
               // $this is only set when the method is called on an instance
               if (!isset($this))
                   throw new Exception(__METHOD__ . " is not called from a class instance");
           }
       }

 

The second one is shorter, but the first one is more explicit.
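
Here is a minimal usage sketch of the second technique (PHP 5 semantics, where a static call to a non-static method still executes; class A is the example class from above):

$a = new A();
$a->m(); // instance call: $this is set, no exception

try {
    A::m(); // static call: $this is not set, so the exception is thrown
} catch (Exception $e) {
    echo $e->getMessage() . "\n"; // prints "A::m is not called from a class instance"
}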

Determine outgoing Apache bandwidth usage with built-in commands

To find the bandwidth usage of your Apache server you can use one of many existing tools, such as vnstat or awstats. What these tools have in common is that they must be installed first. What if you don’t have any of them installed and you want to calculate your bandwidth anyway? This can be done easily by parsing the Apache access logs. The technique only works if you have no bandwidth-eating service other than Apache on the machine, but most web servers fall into this category. So here is the technique to find Apache bandwidth usage.

Note: you need SSH access to the server to perform these actions, and I assume you have not deleted your log files.

  1. Determine the date range for which you want to find the bandwidth usage. For example, I want to determine the bandwidth usage from Oct 1st 2012 to Oct 30th 2012. Note that you must have the access log files for that range.
  2. Now the big command, assuming your Apache log directory is /var/log/apache2. It finds the rotated log files whose modification time falls in the range, decompresses them, extracts the “request” status size portion of each 200 response line, and sums the size field (a variation that also covers not-yet-rotated logs is sketched after these steps).
    1. find /var/log/apache2 -type f \
           -name '*.gz' \
           -newermt "2012-10-01 00:00:00" \
           -not \
           -newermt "2012-10-30 23:59:59" \
           -exec zcat '{}' \; |
       egrep -o '"[^"]+" +200 [0-9]+' |
       awk '{sum+=$(NF)} END {print sum/1024/1024/1024 " GB"}'
  3. This will print something like “34.345 GB”.
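
The command above only scans rotated *.gz files and only counts responses with status 200. As a hedged variation: zcat -f passes uncompressed files through unchanged, so a file name pattern like 'access*' (an assumption; adjust it to your log rotation scheme) also picks up the current log, and widening 200 to [0-9]{3} counts the bytes of every response:

    find /var/log/apache2 -type f \
        -name 'access*' \
        -newermt "2012-10-01 00:00:00" \
        -not \
        -newermt "2012-10-30 23:59:59" \
        -exec zcat -f '{}' \; |
    egrep -o '"[^"]+" +[0-9]{3} [0-9]+' |
    awk '{sum+=$(NF)} END {print sum/1024/1024/1024 " GB"}'

Keep in mind that -newermt filters on file modification time, so the result is an approximation of the actual date range of the log entries.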

Splitting a huge MySQL dump for easy restore

The open source RDBMS MySQL provides a great tool, mysqldump, to dump databases. It’s the official SQL dump utility for MySQL. It makes a DBA’s life so easy that backing up and restoring a database takes just two commands. But sometimes, due to lack of infrastructure, you cannot dump and restore that easily, especially when you are dealing with a huge amount of data. A database grows over time; a few hundred GBs are quite common, and if you run a piece of software long enough it might get into the terabyte range. The problems start when the size is this huge.

Backing up and restoring such a huge database is not easy. Both operations take a lot of time, and if something interrupts them partway, the output will most likely be corrupted. For a backup the output is a dump file, which is corrupted if the backup is interrupted; just think of it breaking at 99%. It feels like pulling your hair out! Now think of it being a restore operation: then your database itself is corrupted. We usually restore on a production server, and a failed restore there is a disaster.

To get around this, it’s better to back up and restore part by part. This way your database is much less likely to end up corrupted, and a failed part can be rerun on its own (a part-by-part restore loop is sketched at the end of this post). Here are some ways to achieve this,

  1. Split the database by tables. When backing up, split the backup operation by tables. Balance the groups by table size, then back up each group separately. For example, if you have 3 tables of 10 GB each and 2 tables of 30 GB each, you can back up the tables in 3 groups (a query for looking up table sizes is sketched at the end of this post).
    • Group 1: the three 10 GB tables, totaling 30 GB
      • mysqldump database1 table1 table2 table3 > table1-3.sql
    • Group 2: one of the two 30 GB tables
      • mysqldump database1 table4 > table4.sql
    • Group 3: the other 30 GB table
      • mysqldump database1 table5 > table5.sql
  2. Split the big tables by rows. Not all tables in a database are big; there are usually just a few that are big enough to cause a headache for the DBA. Think about Twitter: assuming a simple database schema, all the other tables combined are probably not bigger than the table that holds the tweets. In our previous example, table4 and table5 are the big ones. For such big tables we can split the dump by rows. mysqldump provides a great option called “--where”: it accepts any condition that is valid in a SQL WHERE clause, so the number of ways you can split a table is practically infinite. Here are some strategies,
    1. auto_increment. If your table has an AUTO_INCREMENT column, you can split it into any number of chunks using the modulus operator. See the examples.
      1. Split the odd and even rows
        mysqldump --where "id%2=0" database1 table4 > table4_even.sql
        mysqldump --where "id%2=1" database1 table4 > table4_odd.sql
      2. Using any number of chunks (four here)
        mysqldump --where "id%4=0" database1 table5 > table5_0.sql
        mysqldump --where "id%4=1" database1 table5 > table5_1.sql
        mysqldump --where "id%4=2" database1 table5 > table5_2.sql
        mysqldump --where "id%4=3" database1 table5 > table5_3.sql
    2. LIMIT clause. Even though --where expects a condition, with a little hack you can smuggle in a LIMIT clause as well, which gives a true partition of the table. For example, if you have 18000 rows in a table, you can dump it in two parts like this (a deterministic variant with ORDER BY is sketched at the end of this post),
      mysqldump --where "1 LIMIT 0, 10000" database1 table5 > table5_part_1.sql
      mysqldump --where "1 LIMIT 10000, 10000" database1 table5 > table5_part_2.sql
    3. Others. As the --where switch accepts any SQL condition, you can use any criterion: split a user table by sex (male and female), by age group, by any ENUM column you are using, by date span, etc. Some examples,
      mysqldump --where "sex='M'" database1 user > user_m.sql
      mysqldump --where "sex='F'" database1 user > user_f.sql
      mysqldump --where "account='ACTIVE'" database1 user > user_active.sql
      mysqldump --where "account='INACTIVE'" database1 user > user_inactive.sql
      mysqldump --where "year(date) <= 2008" database1 payment > payment_prior_2009.sql
      mysqldump --where "year(date) > 2008" database1 payment > payment_2009_and_after.sql
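
To balance the groups in strategy 1, you first need the table sizes. Here is a minimal sketch using information_schema (database1 is the example database name from above; for InnoDB the figures are estimates):

    mysql -e "SELECT table_name,
                     ROUND((data_length + index_length)/1024/1024/1024, 2) AS size_gb
              FROM information_schema.tables
              WHERE table_schema = 'database1'
              ORDER BY size_gb DESC"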
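
For the LIMIT hack in strategy 2 to produce a deterministic partition, both dumps must read the rows in the same order. Since the --where value is pasted into the query verbatim, you can smuggle in an ORDER BY too (assuming table5 has a unique id column):

    mysqldump --where "1 ORDER BY id LIMIT 0, 10000" database1 table5 > table5_part_1.sql
    mysqldump --where "1 ORDER BY id LIMIT 10000, 10000" database1 table5 > table5_part_2.sql

Each invocation still sees its own snapshot of the data, so run the parts back to back on a quiet table.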
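
Restoring then happens part by part as well: feed the dump files back one at a time, and if one part fails you only need to rerun that file rather than the whole dump. A minimal sketch, assuming the file names from the examples above:

    for f in table1-3.sql table4_even.sql table4_odd.sql table5_part_*.sql; do
        echo "restoring $f"
        mysql database1 < "$f" || { echo "failed on $f" >&2; break; }
    done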