Wednesday, August 1, 2012

Zend Queue with Magento



By using Zend Queue with Magento we can create an event-driven, asynchronous integration system.

Just think about how much work you can offload from Magento when integrating with other systems, or even with Magento itself.

To keep things simple, let's pretend that you send emails to your customers every time a new product gets added. Usually these products get added during the day, which also happens to be when your customers are most actively buying on the site. An alert about a new product is important, but you don't want to bog down your email server. That's where Zend Queue comes to the rescue. In this example I am using MySQL to store the queue. If you follow the same example, remember to create the tables first; the schema can be found under lib/Zend/Queue/Adapter/Db/mysql.sql.
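For example, you can load that schema straight into the database Magento already uses (the user and database name below are placeholders; take the real values from app/etc/local.xml):

mysql -u magento_user -p magento_db < lib/Zend/Queue/Adapter/Db/mysql.sql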

In production it will make even more sense to use MemcacheQ or Apache ActiveMQ as the backend, to take the load off MySQL.
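Switching backends later is just a matter of instantiating Zend_Queue with a different adapter. A rough sketch for ActiveMQ over STOMP (the broker host, port, and options here are assumptions; check the Zend_Queue_Adapter_Activemq documentation against your setup):

$queueOptions = array(
    Zend_Queue::NAME => 'offline_email',
    'driverOptions'  => array(
        'host'   => '127.0.0.1', // ActiveMQ broker (assumed local)
        'port'   => 61613,       // default STOMP port
        'scheme' => 'tcp',
    ),
);
$queue = new Zend_Queue('Activemq', $queueOptions);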

Here's an example:

In your observer class:

public function sendEmails($observer)
{
    // hand the payload off to the queue instead of mailing inline
    Mage::helper('OfflineSync')->saveEmailsOffline($observer->getData('object_container'));
}
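For the observer to fire at all, it has to be registered in your module's config.xml. A minimal sketch, assuming a hypothetical catalog_product_save_after trigger and a hypothetical offlinesync class group for the module:

<global>
    <events>
        <catalog_product_save_after>
            <observers>
                <offlinesync_send_emails>
                    <class>offlinesync/observer</class>
                    <method>sendEmails</method>
                </offlinesync_send_emails>
            </observers>
        </catalog_product_save_after>
    </events>
</global>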

The helper class:

/**
 * Offline sync helper: wraps Zend_Queue to produce and consume messages.
 *
 * @author letas
 */
class ZendQueue_OfflineSync_Helper_Data extends Mage_Core_Helper_Data {

    protected $_name = "general";
    protected $_registry = array();
    protected $_queue = null;

    protected function getQueue() {
        if (!isset($this->_registry[$this->_name])) {
            // reuse Magento's own database credentials from app/etc/local.xml
            $db = simplexml_load_file(Mage::getBaseDir('etc') . DS . 'local.xml');
            $db = $db->global->resources->default_setup->connection;
            $queueOptions = array(
                Zend_Queue::NAME => $this->_name,
                'driverOptions' => array(
                    'host'     => (string)$db->host,
                    'port'     => (string)$db->port,
                    'username' => (string)$db->username,
                    'password' => (string)$db->password,
                    'dbname'   => (string)$db->dbname,
                    'type'     => 'pdo_mysql',
                    Zend_Queue::TIMEOUT => 1,
                    Zend_Queue::VISIBILITY_TIMEOUT => 1
                )
            );
            // create (or fetch) a database-backed queue
            $this->_registry[$this->_name] = new Zend_Queue('Db', $queueOptions);
        }
        return $this->_registry[$this->_name];
    }

    public function getEmailsOffline() {
        try {
            $this->_name = "offline_email";
            $this->_queue = $this->getQueue();
            // receive() returns a Zend_Queue_Message_Iterator
            foreach ($this->_queue->receive() as $message) {
                // send the real mail now; the payload is in $message->body
                // (unserialize() it if an array or object was queued)

                // delete the message once it has been processed
                $this->_queue->deleteMessage($message);
            }
        } catch (Exception $e) {
            Mage::logException($e);
            return -1;
        }
        return 1;
    }

    public function saveEmailsOffline($emails) {
        if (isset($emails)) {
            // the queue stores strings, so flatten arrays/objects first
            if (is_array($emails) || is_object($emails)) {
                $emails = serialize($emails);
            }
            $this->_name = "offline_email";
            $this->getQueue()->send($emails);
        }
        return $this;
    }

}

Now you can configure a cron job in your module's config.xml; the method it runs would simply call the consumer:

 Mage::helper('OfflineSync')->getEmailsOffline();
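The crontab section of config.xml might look like this (the job name, schedule, and <run><model> alias are hypothetical; point the model at whatever class method wraps the helper call above):

<crontab>
    <jobs>
        <offlinesync_process_emails>
            <schedule>
                <cron_expr>*/5 * * * *</cron_expr>
            </schedule>
            <run>
                <model>offlinesync/observer::processEmailQueue</model>
            </run>
        </offlinesync_process_emails>
    </jobs>
</crontab>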

And you are done! Pretty simple, right?

The possibilities are endless: order exports, customer exports... everything can be queued up.

Cloning Magento modules

Cloning Magento modules will never be a hard task again.

Today I wanted (read: was forced) to create a module based on one of Magento's core modules. I literally needed to clone some of the Magento modules, and while the copy and paste is simple stuff, going class by class and file by file renaming and reconfiguring is no joke.

So why do I need to clone it instead of just doing the usual OOP stuff Magento is so good at? Simply because the functionality is really different in most files and settings. Creating a payment method, paygate, or giftcard module is easier when you have a skeleton to work with. Think about it: how different is paying with PayPal from Google Checkout? Functionality-wise, not that much; essentially they do the same thing but implement it differently.

I remember someone (I am talking to you, Alan Storm) saying: "It's programming - come up with a canonical way of doing it, put it in a function and forget about it".

So of course I decided that cloning Magento modules was never going to be a hard task again.

So from today onward, cloning Magento modules is going to be pretty simple (at least for me). Be warned: there are some hard-coded paths here and little to no validation. Use at your own risk, and needless to say, don't use it on a production site. The script doesn't take care of the app/etc/modules/module_name.xml declaration file, so you have to create that manually.
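That declaration file is only a few lines. A sketch, assuming the MyCompany namespace from the script below and a hypothetical NewModule module name:

<?xml version="1.0"?>
<config>
    <modules>
        <MyCompany_NewModule>
            <active>true</active>
            <codePool>local</codePool>
        </MyCompany_NewModule>
    </modules>
</config>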

Copy the following code into a file and save it as clone (no extension needed):
#!/bin/bash

ORIGINAL_NAME=$1
NEW_NAME=$2
NAMESPACE="MyCompany"

#copy the module into the local code pool
mkdir -p "app/code/local/$NAMESPACE"
cp -R "app/code/core/Mage/$ORIGINAL_NAME" "app/code/local/$NAMESPACE/$NEW_NAME"
#lowercase both the original name and the new name
lowercase_orig=`echo $ORIGINAL_NAME | tr '[A-Z]' '[a-z]'`
lowercase_new=`echo $NEW_NAME | tr '[A-Z]' '[a-z]'`
#rename the class declarations and other references
grep -lr "$ORIGINAL_NAME" "app/code/local/$NAMESPACE/$NEW_NAME/" | xargs -d "\n" sed -i "s/$ORIGINAL_NAME/$NEW_NAME/g"
#swap the Mage_ class prefix for our namespace (leaving Mage:: calls intact)
grep -lr "Mage_$NEW_NAME" "app/code/local/$NAMESPACE/$NEW_NAME/" | xargs -d "\n" sed -i "s/Mage_$NEW_NAME/${NAMESPACE}_$NEW_NAME/g"
#rename the class shortcuts
grep -lr "$lowercase_orig" "app/code/local/$NAMESPACE/$NEW_NAME/" | xargs -d "\n" sed -i "s/$lowercase_orig/$lowercase_new/g"
#rename the files
find "app/code/local/$NAMESPACE/$NEW_NAME" -name "*$ORIGINAL_NAME*" -exec rename "s/$ORIGINAL_NAME/$NEW_NAME/g" {} \;

The trick here is to use find and grep to find the old module name and class shortcuts, and sed and rename to change them to the new ones. Notice that tr is used to lowercase the old and new names because we are doing case-sensitive searches. Note also that the namespace swap targets the Mage_<Module> class prefix specifically, so calls to the Mage god class keep working. Finally, rename and sed only replace the portion of the text they find.

First, let's make sure we can execute the file (you only need to do this once):
chmod +x clone

Then in the root of your Magento installation do:
 ./clone module1 module2

And there you have it: a brand new module cloned from core to use as your starting point.

Full Page cache with nginx and memcache


Since the cool kids at Google, Microsoft and Amazon researched how performance and scalability affect conversion rates, page load time has become a hot topic for every eCommerce store.

Magento was once a resource hog that consumed everything available to it, and you had to be a magician to pull off some awesome benchmarks without using a reverse proxy or full page cache mechanism. Creating a full page cache with nginx and memcache is really simple (right after hours of research).

Words of warning first:

Don't use this instead of Varnish or Magento's full page caching. This implementation of full page cache is very simple; it will even be troublesome to clean the cache consistently because, guess what, there is no hole punching. You could, however, enhance the configuration file to read cookies and serve those requests directly from the backend server instead.

Another caveat: you'll need to ensure that two-level caching (Zend's TwoLevels backend) is used so that you can flush specific URLs.
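If you are not using it already, the TwoLevels setup goes in app/etc/local.xml. A sketch, assuming a single local memcached instance (host and port are placeholders):

<global>
    <cache>
        <backend>memcached</backend>
        <slow_backend>database</slow_backend>
        <memcached>
            <servers>
                <server>
                    <host>127.0.0.1</host>
                    <port>11211</port>
                    <persistent>1</persistent>
                </server>
            </servers>
        </memcached>
    </cache>
</global>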

Now that that's out of the way, let's focus on the matter at hand.

I have tried this configuration file with both Magento Enterprise and Community editions, and also with WordPress.


#memcache servers, load balanced
upstream memcached {
    server server_ip_1:11211 weight=5 max_fails=3 fail_timeout=30s;
    server server_ip_2:11211 weight=3 max_fails=3 fail_timeout=30s;
    server server_ip_3:11211;
    keepalive 1024 single;
}
#fastcgi - little load balancer
upstream phpbackend {
    server server_ip_1:9000 weight=5 max_fails=5 fail_timeout=30s;
    server server_ip_2:9000 weight=3 max_fails=3 fail_timeout=30s;
    server server_ip_3:9000;
}
server {
    listen   80; ## listen for ipv4; this line is default and implied
    root /var/www/vhosts/kingletas.dev/www;
    server_name kingletas.dev;
    index index.php index.html index.htm;

    client_body_timeout  1460;
    client_header_timeout 1460;
    send_timeout 1460;
    client_max_body_size 10m;
    keepalive_timeout 1300;

    location /app/                { deny all; }
    location /includes/           { deny all; }
    location /lib/                { deny all; }
    location /media/downloadable/ { deny all; }
    location /pkginfo/            { deny all; }
    location /report/config.xml   { deny all; }
    location /var/                { deny all; }

    location ~* \.(jpg|png|gif|css|js|swf|flv|ico)$ {
        expires max;
        tcp_nodelay off;
        tcp_nopush on;
    }
    location / {
        try_files $uri $uri/ @handler;
        expires 30d;
    }
    location @handler {
        rewrite / /index.php;
    }

    location ~ \.php$ {
        if (!-e $request_filename) {
            rewrite / /index.php last;
        }
        expires off; ## Do not cache dynamic content
        default_type text/html;
        charset utf-8;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        if ($request_method = GET) { # I know if statements are evil but don't know how else to do this
            set $memcached_key $request_uri;
            memcached_pass memcached;
            error_page 404 502 = @cache_miss;
            add_header x-header-memcached true;
        }
        if ($request_method != GET) {
            fastcgi_pass phpbackend;
        }
    }
    location @cache_miss {
        # are we using a reverse proxy?
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_max_temp_file_size 0;

        #configure fastcgi; use the same upstream pool as above
        fastcgi_pass phpbackend;
        fastcgi_send_timeout 5m;
        fastcgi_read_timeout 5m;
        fastcgi_connect_timeout 5m;
        fastcgi_buffer_size 256k;
        fastcgi_buffers 4 512k;
        fastcgi_busy_buffers_size 768k;
        fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code;
        fastcgi_param GEOIP_COUNTRY_NAME $geoip_country_name;
        # nginx only keeps the last PHP_VALUE, so both settings go in one directive
        fastcgi_param PHP_VALUE "memory_limit = 32M
            max_execution_time = 18000";
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
    location ~ /\. {
        deny all;
    }
}
#if you want to make it even better your own cdn
#server {
#      listen 80; 
#      server_name media.kingletas.dev;
#      root /var/www/vhosts/kingletas.dev/www;
#}
#server {
#      listen 80; 
#      server_name css.kingletas.dev;
#      root /var/www/vhosts/kingletas.dev/www;
#}
#server {
#      listen 80; 
#      server_name js.kingletas.dev;
#      root /var/www/vhosts/kingletas.dev/www;
#}

One major thing to remember is that nginx will only try to read from memcache, never write to it. In other words, you still need to write the page contents to memcache yourself. For WordPress, this is what I did in index.php:

/**
* Front to the WordPress application. This file doesn't do anything, but loads
* wp-blog-header.php which does and tells WordPress to load the theme.
 *
* @package WordPress
 */

/**
* Tells WordPress to load the WordPress theme and output it.
 *
* @var bool
 */
ini_set("memcache.compress_threshold", 4294967296); // 2^32, see the note below
ob_start();

define('WP_USE_THEMES', true);

/** Loads the WordPress Environment and Template */
require('./wp-blog-header.php');

// capture the rendered page instead of letting WordPress print it directly
$buffer = ob_get_contents();
ob_end_clean();

// only cache GET requests, matching what the nginx config will serve
if ($_SERVER['REQUEST_METHOD'] === 'GET') {
    $memcache_obj = memcache_connect("localhost", 11211);
    // key = request URI (the same key nginx looks up), flags = 0 (no compression)
    memcache_add($memcache_obj, $_SERVER['REQUEST_URI'], $buffer, 0);
}

echo $buffer;


Notice that I had to change the memcache.compress_threshold setting to a huge number (2^32). That is because memcache ignores the no-compress flag once the threshold is exceeded and compresses the content anyway; nginx's memcached module serves the stored bytes as-is, so the browser would end up receiving compressed garbage.
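For Magento the approach is the same; a minimal sketch of the equivalent change to Magento's own index.php, wrapping the stock Mage::run() call in an output buffer ($mageRunCode and $mageRunType are already defined earlier in the stock file):

ini_set("memcache.compress_threshold", 4294967296); // same 2^32 trick as above
ob_start();

Mage::run($mageRunCode, $mageRunType);

$buffer = ob_get_contents();
ob_end_clean();

// only cache GETs, mirroring the nginx config
if ($_SERVER['REQUEST_METHOD'] === 'GET') {
    $memcache_obj = memcache_connect("localhost", 11211);
    memcache_add($memcache_obj, $_SERVER['REQUEST_URI'], $buffer, 0);
}

echo $buffer;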

So there you have it: an easy way to implement full page caching with nginx and memcache for WordPress, Magento, and the rest of the framework world.