Sunday, September 23, 2007

Turbo-charge your PHP website with query caching

Many PHP websites rely on some kind of back-end storage, the most common I would guess being MySQL. But no matter what database server you use, you will sooner or later reach a point where your traffic exceeds your server's capabilities. The first time this happens you will probably start optimizing your queries, perhaps caching some results in the session, adding a few indices to your tables and so on. This works for a while, and might even work over and over again, but working like this, putting out fires as they flare up, can be quite stressful (and annoying!)

So, how do you avoid this? I have developed a simple enough method that caches entire query results, using the query itself as the cache key, and it works out great for me most of the time. Since the queries can be quite long, I actually use the MD5 sum of the query, but that's just one strategy. I use a range of different cache backends for storing the cached data. Sometimes I can use a fast local filesystem (like tmpfs), so I use a file-based cache, but in cases where I need extreme performance I might "couple" shared-memory and memcached storage. Caching some data in the session can still be a good idea, though.
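Just to give an idea of what a "coupled" backend could look like, here is an illustrative sketch (not the exact code I run; the class name and the per-request array standing in for real shared memory are made up). The pecl/memcache extension handles the memcached part:

// Illustrative two-level cache: a per-request array in front of memcached.
// Class and method names are placeholders, not the exact backend I use.
class CoupledCache
{
    private static $local = array();   // stands in for "shared memory" here
    private static $mc = null;

    private static function memcache()
    {
        if (self::$mc === null) {
            self::$mc = new Memcache();
            self::$mc->connect('127.0.0.1', 11211);
        }
        return self::$mc;
    }

    public static function get($key)
    {
        if (isset(self::$local[$key]))
            return self::$local[$key];
        $val = self::memcache()->get($key);
        if ($val !== false)
            self::$local[$key] = $val;
        return $val;
    }

    public static function set($key, $val, $ttl = 600)
    {
        self::$local[$key] = $val;
        return self::memcache()->set($key, $val, 0, $ttl);
    }
}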

Getting it right from the start is one thing, if you're able to do that (your site is not crawling on its knees quite yet :). It will definitely help you a lot. If you're already stuck with thousands and thousands of lines of code, maybe this will help you get started..?

You need to look closely at every piece of data you pull from the database and consider if, how, and for how long you would be able to cache this "object". You will also need to determine when it needs to be expired (re-read) from the database.

Let's take a look at a simple example from a typical site, where querying the number of unread messages for a (logged in) user is a common operation (maybe even every page load).


// Example code:
function getNumUnreadMessages($user_id)
{
    $sql = "SELECT COUNT(*) FROM messages WHERE receiver_user_id = $user_id AND status='unread'";
    $results = db_fetch_value($sql);

    return $results;
}


A quick thought tells us it's not really efficient to check this value on every page load. The first approach might be to add a variable in the session holding the timestamp of the last check, and compare against it each time the function is called, but this quickly becomes cluttered and tricky to keep track of when you have a lot of queries.
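Just to illustrate why, a sketch of that session-based approach could look something like this (purely hypothetical), and you can imagine how it sprawls once there are more than a handful of queries to track:

// Hypothetical sketch of the session-timestamp approach. One pair of
// session keys per query quickly becomes hard to keep track of.
// Assumes session_start() has already been called.
function getNumUnreadMessages($user_id)
{
    $now = time();
    if (!isset($_SESSION['unread_checked']) || $now - $_SESSION['unread_checked'] > 600) {
        $sql = "SELECT COUNT(*) FROM messages WHERE receiver_user_id = $user_id AND status='unread'";
        $_SESSION['unread_count']   = db_fetch_value($sql);
        $_SESSION['unread_checked'] = $now;
    }
    return $_SESSION['unread_count'];
}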

Providing a simple wrapper function to the above database call is the first step towards implementing cached queries. Some code says more than a thousand words:


function cache_db_fetch_value($sql, $cache_key = false, $cache_time = 600)
{
    if(!$cache_key)
        $cache_key = md5($sql);

    $res = Cache::get($cache_key);
    if($res === FALSE) {
        $res = db_fetch_value($sql);
        if($res !== FALSE) {
            Cache::set($cache_key, $res, $cache_time);
        }
    }

    return $res;
}

function getNumUnreadMessages($user_id)
{
    $sql = "SELECT COUNT(*) FROM messages WHERE receiver_user_id = $user_id AND status='unread'";
    $results = cache_db_fetch_value($sql);

    return $results;
}



I guess you get the picture? We wrap all calls to our original db_fetch_value() in a method that handles the caching "for us", automatically.


However, this will not immediately notify the user when he gets a new message; to do that we need to tweak things a little bit more. We need to specify the cache key in a way that makes it identifiable via the user_id when caching the number of unread messages. Then, when a message is sent to a user, we simply Cache::remove() it, and the next time the user checks his unread messages, he will find that he has one!




function getNumUnreadMessages($user_id)
{
    $sql = "SELECT COUNT(*) FROM messages WHERE receiver_user_id = $user_id AND status='unread'";
    $results = cache_db_fetch_value($sql, "unread_messages_{$user_id}");

    return $results;
}

function sendMessage($sender_id, $receiver_id, $message)
{
    // INSERT INTO messages ...
    Cache::remove("unread_messages_{$receiver_id}");
    // ...
}



Let's move on to a simple Cache implementation that you can start trying out with your own code.


This Cache class provides three basic self-explanatory methods: set, get and remove. The first argument is always the cache_key. This implementation is very simple and stores cached objects as files in a temporary directory. I am sure you can build something more suitable for your environment.



class Cache
{
    // NOTE: this directory must exist and be writable by the web server.
    const CACHE_DIR = "/tmp/cache";

    static function makeFileName($cache_key)
    {
        return Cache::CACHE_DIR . DIRECTORY_SEPARATOR . md5($cache_key);
    }

    static function set($cache_key, $obj, $cache_time = 600)
    {
        $filename = Cache::makeFileName($cache_key);
        $cache_obj = Array(time() + $cache_time, $obj);
        if(file_put_contents($filename, serialize($cache_obj), LOCK_EX)) {
            return TRUE;
        }
        return FALSE;
    }

    static function get($cache_key)
    {
        $filename = Cache::makeFileName($cache_key);
        if(is_file($filename)) {
            list($expire, $obj) = unserialize(file_get_contents($filename));
            if($expire < time()) {
                @unlink($filename);
                return FALSE;
            }
            return $obj;
        }
        return FALSE;
    }

    static function remove($cache_key)
    {
        $filename = Cache::makeFileName($cache_key);
        @unlink($filename);
    }
}



Of course you don't have to use this only for caching queries; it can be useful for caching many other things, like RSS feeds, config files, static files, etc.
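For example, fetching and caching a remote RSS feed works the same way (the URL and cache key below are of course just placeholders):

// Cache a remote RSS feed for 15 minutes using the same Cache class.
// The feed URL and key prefix are placeholders.
function getCachedFeed($url = "http://example.com/feed.rss")
{
    $key = "rss_" . md5($url);
    $xml = Cache::get($key);
    if($xml === FALSE) {
        $xml = file_get_contents($url);
        if($xml !== FALSE) {
            Cache::set($key, $xml, 900);
        }
    }
    return $xml;
}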


Happy Caching for now!

Sunday, August 5, 2007

Fix the PNGs

While developing a recent website I ran into the classic PNG transparency problem, where IE before version 7 needs the AlphaImageLoader filter to correctly handle transparent PNG images. While this doesn't work properly for CSS background images, it makes it possible to use PNGs to a satisfactory degree.

There's a bunch of scripts out there to "automagically" handle loading of PNG images, but the most elegant I found somewhere was the pngbehaviour.htc file, which adds a behaviour to the img tag in IE using a css "trick":


img {
behavior: url("../css/pngbehavior.htc");
}


I did have to make some changes to the script, firstly because not all of the PNG images used had a ".png" suffix, and secondly because its browser detection also matched IE 7 (and future versions 8 and 9), where PNG transparency works natively, so it only added extra overhead for IE users with a working version.

The modified pngbehaviour script follows:


<public:component>
<public:attach event="onpropertychange" onevent="propertyChanged()" />
<script>

var supported = /MSIE (5\.5|6)/.test(navigator.userAgent) && navigator.platform == "Win32";
var realSrc = "";
var blankSrc = "/img/web/trans.gif";

if (supported) fixImage();

function propertyChanged() {
if (!supported) return;

var pName = event.propertyName;
if (pName != "src") return;
// if not set to blank
if ( ! new RegExp(blankSrc).test(element.src))
fixImage();
};

function fixImage() {
// get src
var src = element.src;
var fixme = element.fixme;

// check for real change
if (src == realSrc) {
element.src = blankSrc;
return;
}

if ( ! new RegExp(blankSrc).test(src)) {
// backup old src
realSrc = src;
}

// test for png
if ( /\.png$/.test( realSrc.toLowerCase() ) ||
(fixme && fixme == 1)) {
// set blank image
element.src = blankSrc;
// set filter
element.runtimeStyle.filter = "progid:DXImageTransform.Microsoft.AlphaImageLoader(src='" + src + "',sizingMethod='image')";
}
else {
// remove filter
element.runtimeStyle.filter = "";
}
}

</script>
</public:component>


I have introduced a new attribute (which breaks XHTML, yes), named "fixme", to "force" the script above to "fix" images that do not have a .png suffix (automatically generated images). Example:
<img src="/generateImage1" fixme="1" />

Enjoy

Friday, July 20, 2007

How to share your laptop WiFi connection

I recently moved into a countryside apartment with no phone line, which means the only ways of getting online would be to get one of those Super-3G HSDPA modems, which would give me at most about 3 Mbit downstream, OR to set up a wireless link from the next house (which has an 8/1 Mbit ADSL line) using regular WiFi.

The second problem then would be how to connect the "server" I have standing under my desktop, since it only has a wired connection. Solution 1 would be to buy a USB WiFi card and plug it in, but I'm "broke", and I'm not sure which cards work with Ubuntu (probably most, but anyway).

Since I had one old D-Link DWL-700 AP and also a DI-604 lying around, I decided to connect the (WinXP) laptop using WiFi and then share the laptop's WiFi connection through the LAN port. I guess any basic AP and switch would do the trick; heck, you could even use a crossover LAN cable between laptop and server.

It's all pretty straightforward once you get the basic components up; connecting the AP is easy, just plug it into the existing LAN switch in the neighbouring house and make sure it's placed to get full signal strength to my apartment (which wasn't very tricky; window-to-window it's about 10 meters between the houses with no obstructions).

After that, I plugged both the laptop's wired connection and the "server" into the DI-604 switch and configured a separate LAN network where the laptop got an IP of 192.168.50.1 and the server got 192.168.50.2. I set the default route of the laptop LAN interface to the WiFi AP's IP address (which defaulted to 192.168.0.50 and suited me fine), and the server got a default route of 192.168.50.1 (the laptop). And... Voilà!

This was all I needed to configure. Needless to say, I was overjoyed! No fussing around with XP ICS, no extra routing etc. It Just Works™

Still, there are problems. The DWL-700 is a very simplistic AP and does not have all the fancy routing and firewall features I need to forward ports to my server etc., so I'll probably invest in a better high-end AP once I get the money.

Now, if anyone has tried other nice ways of sharing connections, please let me know..

Sunday, July 15, 2007

No Posts and how to make a GUID

There have not been any posts for quite a while now. Sorry about that, but I've been working on a couple of projects, building an "apartment", and also a new web/mobile project which is due for release tomorrow. I'll post the link(s) then!

For now I'll post a piece of PHP code that I needed to write for an RPC in this project. Might come in handy to some (win32) users.

/**
 * Creates a unique GUID string.
 *
 * @return string Unique GUID
 */
function makeGUID()
{
  $ls = Array(8,4,4,4,12);
  $chrs = "0123456789abcdef";
  $guid = "";
  $chlen = strlen($chrs)-1;
  foreach($ls as $len) {
    if($guid != "")
      $guid .= "-";
    for($i=0; $i<$len; $i++)
      $guid .= $chrs[rand(0, $chlen)];
  }
  return $guid;
}
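Usage is as simple as it gets (the output is of course random on every call):

echo makeGUID() . "\n";  // e.g. 550e8400-e29b-41d4-a716-446655440000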

Sunday, March 11, 2007

Reworking a message table

In one of the databases I manage, there are currently two large tables containing Messages and
Guestbook entries. Both of these tables are huge, and they are starting to become the primary DB server's main concern.

Create definition of the current (old) table:


CREATE TABLE `message` (
`id` int(10) unsigned NOT NULL auto_increment,
`sender_id` int(11) unsigned NOT NULL default '0',
`receiver_id` int(11) unsigned NOT NULL default '0',
`sent` datetime NOT NULL default '0000-00-00 00:00:00',
`status` enum('unread','read','archived','replied','deleted') NOT NULL default 'unread',
`massmess_id` int(10) unsigned default NULL,
`title` varchar(127) NOT NULL default '',
`message` text NOT NULL,
`show_out` enum('Y','N') NOT NULL default 'Y',
PRIMARY KEY (`id`),
KEY `massmess_id_idx` (`massmess_id`),
KEY `receiver_id_sent_idx` (`receiver_id`,`sent`),
KEY `receiver_id_status_idx` (`receiver_id`,`status`),
KEY `sender_id_sent_idx` (`sender_id`,`sent`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

(The guestbook table looks very similar.)

Needless to say, holding 7 million rows, it was time to re-design and also consider joining the two big tables into a single database structure that could scale this data more efficiently.

Considering that I run these tables in InnoDB, I would of course want to take advantage of its
index clustering feature, and make sure we don't duplicate data too much.

For any efficient design to work, I needed to look at the current queries executed in the messaging system.

There are 4 major query types (percentages are guesstimates):
Count new messages
SELECT COUNT(*) FROM message WHERE receiver_id=X AND status='unread' (90%)
Read inbox messages:
SELECT basic_fields FROM message [JOINS] WHERE receiver_id=X; (7%)
Read outbox messages:
SELECT basic_fields FROM message [JOINS] WHERE sender_id=X; (2%)
Read full message:
SELECT most_fields FROM message WHERE id=X; (1%)

Mainly considering these queries, I ended up with the following new tables (no FOREIGN KEYS in this example):

CREATE TABLE message_receiver (
receiver_id int unsigned not null,
message_id int unsigned not null,
type enum('message', 'guestbook') NOT NULL DEFAULT 'message',
status enum('unread','read','archived','replied','deleted') NOT NULL default 'unread',
PRIMARY KEY(receiver_id, message_id, type, status)
) ENGINE=InnoDB;

CREATE TABLE message_sender (
sender_id int unsigned not null,
message_id int unsigned not null,
type enum('message', 'guestbook') NOT NULL DEFAULT 'message',
status enum('unread','read','archived','replied','deleted') NOT NULL default 'unread',
PRIMARY KEY(sender_id, message_id, type, status)
) ENGINE=InnoDB;

CREATE TABLE message_detail (
`message_id` int unsigned NOT NULL default '0',
`sender_id` int(11) unsigned NOT NULL default '0',
`receiver_id` int(11) unsigned NOT NULL default '0',
`sent` datetime NOT NULL default '0000-00-00 00:00:00',
`massmess_id` int(10) unsigned default NULL,
PRIMARY KEY (message_id, receiver_id),
KEY `massmess_id_idx` (`massmess_id`),
KEY sender_idx (sender_id)
) ENGINE=InnoDB;

CREATE TABLE message_data (
`message_id` int unsigned NOT NULL default '0',
`title` varchar(127) NOT NULL default '',
`message` text NOT NULL,
PRIMARY KEY (`message_id`)
) ENGINE=InnoDB;



This structure provides extremely fast access for counting unread messages, and also for listing the inbox and outbox, thanks to InnoDB clustering of the primary keys in the message_receiver and message_sender tables. The small size of these tables also makes them fit better in the InnoDB buffer pool.

Now for the last trick, which saves me from having to rewrite some of the code while still benefiting from the optimizations of the new structure: a view. MySQL supports it, it works, so let's use it.

Defining the view is simple, and I create it to mimic the definition of the old "message" table:
CREATE VIEW message AS
SELECT MR.message_id, MR.receiver_id, MR.type, MR.status,
       MD.sender_id, MD.sent, MD.massmess_id,
       DA.title, DA.message
FROM message_receiver MR
INNER JOIN message_detail MD ON MD.message_id=MR.message_id
INNER JOIN message_data DA ON DA.message_id=MR.message_id;

While the view only allows us to read data the way we used to (multi-table updates through a view are not yet possible), it means I need not rewrite all of the code in my application, only change the methods that modify data.
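To give an idea of what such a modified method might look like, here is a rough sketch of marking a message as read against the new structure (db_query() is just a placeholder for whatever your database wrapper provides):

// Sketch only: db_query() stands in for your own db wrapper.
// Note that only the small message_receiver table is touched now.
function markMessageRead($receiver_id, $message_id)
{
    db_query("UPDATE message_receiver SET status='read'" .
             " WHERE receiver_id=" . (int)$receiver_id .
             " AND message_id=" . (int)$message_id .
             " AND type='message'");
}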

The new optimized query for counting unread messages:
Count new messages:
SELECT COUNT(*) FROM message_receiver WHERE receiver_id=X AND status='unread'

From EXPLAIN, I can see that this query now shows "Using index", and I can also see that this index is used even if I run the original count query against the view. Amazing!

Left now is populating the new tables and moving the old table out of the way for the view.
ALTER TABLE message RENAME message_old;

INSERT INTO message_receiver (receiver_id,message_id,status) SELECT receiver_id,id,status FROM message_old;
INSERT INTO message_sender (sender_id,message_id,status) SELECT sender_id,id,status FROM message_old;
INSERT INTO message_detail (message_id,receiver_id,sender_id,sent,massmess_id) SELECT id,receiver_id,sender_id,sent,massmess_id FROM message_old;
INSERT INTO message_data (message_id,title,message) SELECT id,title,message FROM message_old;

That's it for the database part of things. Time to dive in to the code and make this work =)

Comments appreciated!

Wednesday, February 7, 2007

Setting up a PHP / MySQL development server


This is a quick walkthrough of my development server install. Someone might find it useful. Sometime. I hope. This is all The Way I Like It™


The initial requirements for my development server this time were: MySQL, a web server, CVS, PHP 5.2 and memcached.


Install Fedora Core 6, packages and partitions as you like. I use /data and /logs partitions, as I have loads of disk and small projects. For software packages, I leave out just about everything except for firewall and emacs (I do love emacs!). Every developer gets his own user.


Log in as root.



Install MySQL:

root@dev# yum install -y mysql-server mysql-devel
(edit config in /etc/my.cnf to your needs)
root@dev# service mysqld start


Dump sendmail (just don't like it) for postfix:
root@dev# yum remove sendmail
root@dev# yum install -y postfix

Install packages required for webserver (I use lighttpd, it rocks)
root@dev# yum install -y lighttpd lighttpd-fastcgi

Libraries for memcached:
root@dev# yum install -y libevent libevent-devel

Download and untar memcached and PHP memcache extension:
root@dev# wget http://www.danga.com/memcached/dist/memcached-1.2.1.tar.gz
root@dev# wget http://pecl.php.net/get/memcache-2.1.0.tgz
root@dev# tar xfz memcached-1.2.1.tar.gz
root@dev# tar xfz memcache-2.1.0.tgz

Build and install memcached:
root@dev# cd memcached-1.2.1/; ./configure; make install

Install compilers and libraries for PHP (I need freetype, xml & curl, you might not):
root@dev# yum install -y gcc gcc-c++ flex libjpeg libjpeg-devel \
libpng libpng-devel mysql-devel libxml2-devel \
curl-devel freetype-devel

Configure and build PHP (this assumes you have already downloaded and unpacked the PHP 5.2.0 source; your configure options may vary):
root@dev# cd php-5.2.0
root@dev# ./configure --enable-fastcgi --enable-discard-path \
--enable-force-redirect --with-mysql --with-gd \
--with-curl --enable-gd-native-ttf \
--without-sqlite --with-memcache=../memcache-2.1.0 \
--enable-sockets --with-libjpeg-dir=/usr/lib \
--with-png-dir=/usr/lib --with-zlib-dir=/usr/lib
root@dev# make install

Build and install memcache PHP extension:
root@dev# yum install -y autoconf
root@dev# cd memcache-2.1.0/
root@dev# phpize
root@dev# ./configure
root@dev# make install

Add to / edit /usr/local/lib/php.ini:
extension_dir=/usr/local/lib/php/extensions/no-debug-non-zts-20060613/
extension="memcache.so"

Edit /usr/local/lighttpd/conf/lighttpd.conf to add PHP as FastCGI and enable user dir support.
Also make sure mod_userdir and mod_fastcgi are enabled in server.modules:

userdir.path = "public_html"

fastcgi.server = ( ".php" => ((
"bin-path" => "/usr/local/bin/php",
"socket" => "/tmp/php.socket",
"max-procs" => 2,
"bin-environment" => (
"PHP_FCGI_CHILDREN" => "8",
"PHP_FCGI_MAX_REQUESTS" => "10000"
),
"bin-copy-environment" => (
"PATH", "SHELL", "USER"
),
"broken-scriptfilename" => "enable"
)))

Open a firewall hole for HTTP. Edit /etc/sysconfig/iptables, and add (in between the other RH-Firewall-1-INPUT rules):
-A RH-Firewall-1-INPUT -p tcp --dport 80 -j ACCEPT
root@dev# service iptables restart


Then, Fire Up The Webserver!
root@dev# service lighttpd start
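To verify that PHP is actually being served through FastCGI, I just drop a tiny test script into a user's public_html (the path and URL assume the userdir setup above; remove it again afterwards):

<?php
// Save as ~someuser/public_html/info.php and browse to
// http://yourserver/~someuser/info.php -- remove it when done.
phpinfo();
?>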


Problems? Back-track, read logs and use strace if you need to.

Installing CVS is a piece of cake:
root@dev# yum install -y cvs

For setting up repositories and such, I recommend the CVS Book, just Google it.

XP performance, (maybe) it can be done!

XP tweaking night, it seems. After successfully building my brother's "delivered-in-pieces-I-hope-everything-is-there" computer (he's online now!), I stumbled across this post, giving you, imho, the best compilation of XP performance "tricks" or "hacks" (whatever you call them) to get started tweaking your XP. I find them most useful, and my XP now performs well even on my, by now "old", laptop.


Another of my favourite tweaks is the Firefox network tweaks. Check out about:config in your browser (you're not in IE, are you?) and take a look at these variables:

network.http.max-connections
network.http.max-connections-per-server
network.http.pipelining
network.http.pipelining.maxrequests
There are more, but these are the most interesting ones.

My current values are:

network.http.max-connections = 32
network.http.max-connections-per-server = 12
network.http.pipelining = true
network.http.pipelining.maxrequests = 21
Also, there are similar settings for those of you who use proxies; I usually don't.

Enjoy your tweaking evening, and stock up on something to snack on during the reboots ;=)

Sunday, February 4, 2007

Recursive directory traversal

Today I found myself in need of traversing a directory structure with millions of files and matching them against an existing database, in order to free up some storage.

At least one good thing came right out of it:
a nice, clean recursive directory traversal function for whenever you need to process a directory tree. It uses hooks, so you can implement whatever action you need for each file and directory.

Simple to use:
process_dir("/path/to/dir", "filehook", "dirhook", 2);
Where "filehook" and "dirhook", if set, are arguments to call_user_func (so you can call class methods) and "2" is the max level of directories to descend into.

File- and directory hook function examples:

<?
function dirhook($path, $dir)
{
  print "dirhook: " . $path . DIRECTORY_SEPARATOR . $dir . "\n";
  return true;
}

function filehook($path, $file)
{
  print "filehook: " . $path . DIRECTORY_SEPARATOR . $file . "\n";
  return true;
}
?>



If either dirhook or filehook returns false, processing of the current directory is aborted.

So, here's the code then. Send me a note if you use it or have suggestions for improvements, ok?



<?
/**
 * Recursive directory traversal function.
 *
 * Author: orIgo (mrorigo@gmail.com)
 * Use, modify and share, but leave my name in here, ok?
 *
 * @param $path      Path of start directory
 * @param $filehook  File callback function
 * @param $dirhook   Directory callback function
 * @param $maxdepth  Max levels of directories to descend into
 */
function process_dir($path,
                     $filehook=null,
                     $dirhook=null,
                     $maxdepth=null,
                     $depth=0)
{
  if($maxdepth && $depth >= $maxdepth)
    return;

  $dir = opendir($path);
  if(!$dir)
    return;  // PHP generates a warning if opendir fails, no need to print more

  while (false !== ($file = readdir($dir)))
  {
    if($file !== "." && $file !== "..")
    {
      $fullpath = $path . DIRECTORY_SEPARATOR . $file;
      if(is_dir($fullpath)) {
        if($dirhook)
          if(!call_user_func($dirhook, $path, $file))
            break;
        process_dir($fullpath, $filehook, $dirhook, $maxdepth, $depth+1);
      }
      else {
        if($filehook)
          if(!call_user_func($filehook, $path, $file))
            break;
      }
    }
  }
  closedir($dir);
}
?>




Immediate update: For better performance under some circumstances, change

if(is_dir($fullpath)) {
to:
if((!$maxdepth || $maxdepth > $depth+1) && is_dir($fullpath)) {

This avoids unnecessary stat() calls when you're not interested in the subdirectories (the extra !$maxdepth check keeps unlimited recursion working when no max depth is given).
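And since the hooks are invoked through call_user_func(), they can just as well be object methods. A small made-up example that sums up file sizes:

// Made-up example: using an object method as the file hook.
class SizeCollector
{
  var $total = 0;

  function addFile($path, $file)
  {
    $this->total += filesize($path . DIRECTORY_SEPARATOR . $file);
    return true; // keep processing the current directory
  }
}

$collector = new SizeCollector();
process_dir("/path/to/dir", array($collector, "addFile"));
print "Total size: " . $collector->total . " bytes\n";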

Monday, January 29, 2007

PHP Timer Class

Wrote a PHP Timer class this afternoon that might come in handy here and there for anyone developing PHP scripts. It's really simple, but powerful enough to be used just about anywhere you need to measure the execution time of a block of code.

Usage is simple:

$t1 = new Timer("mytimer");
.. (perform some code) ..
$t1->stop();
$t1->display();

You can also create any amount of timers and Timer::dump() them at the end of the script for a nice overview.

There's no better way on Blogger to post code like this, so it'll have to do for now:


<?
/*
 * Simple Timer Class
 *
 * Author: Mattias Nilsson (mrorigo@gmail.com)
 * Licenced under GPL. No warranties implied or given.
 * Use at your own risk.
 *
 * Usage:
 * $t = new Timer("myid1");
 * ...
 * $t->stop();
 * ...
 * $t->display();
 * OR
 * Timer::dump();
 *
 */
class Timer
{
  var $id    = false;
  var $start = false;
  var $stop  = false;
  var $time  = false;

  static $_TIMERS = array();

  /*
   * @param $id Unique ID for your timer
   * @param $autostart Set to false to avoid timer start
   */
  function Timer($id, $autostart=true)
  {
    $this->id = $id;
    Timer::addTimer($this);
    if($autostart)
      $this->start();
  }

  /**
   * Start the timer
   */
  function start()
  {
    $this->start = Timer::getmicrotime();
  }

  /**
   * Stop the timer and calculate elapsed time.
   *
   * @return Elapsed time since timer start.
   */
  function stop()
  {
    $this->stop = Timer::getmicrotime();
    $this->time = $this->stop - $this->start;
    return $this->time;
  }

  /**
   * Displays the timer result. If the timer is still running, it is stopped.
   */
  function display()
  {
    if(!$this->time)
      $this->stop();
    print "Timer: " . $this->id . ": " . sprintf("%.5fs", $this->time) . "\n";
  }

  /**
   * Adds a timer to the global timers array
   *
   * @param $timer Timer to add
   */
  static function addTimer($timer)
  {
    Timer::$_TIMERS[$timer->id] = $timer;
  }

  /**
   * Dump/display all timers
   */
  static function dump()
  {
    reset(Timer::$_TIMERS);
    while(list($id, $timer) = each(Timer::$_TIMERS))
      $timer->display();
  }

  /**
   * @return The current system time as a float (seconds.fraction)
   */
  static function getmicrotime()
  {
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
  }

}
?>
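A slightly bigger usage example, timing two blocks and dumping everything at the end (the timed code is of course just filler):

$t1 = new Timer("db_queries");
// ... run your queries here ...
$t1->stop();

$t2 = new Timer("render");
// ... build the page here ...
$t2->stop();

Timer::dump();   // prints one "Timer: id: n.nnnnns" line per timer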



I killed Apache

I hereby swear never to consider Apache as a first choice for any high-performance website again. It's simply not up to the task anymore.

I'm not going to back the previous statement up with a bunch of numbers and fancy graphs, you'll just have to take my word for it or come to this conclusion yourself after considering the options.

There was a time when Apache was one of the Greatest, and it has contributed a great deal to the rapid growth of the Internet, but these days others have started over, considered every flaw, optimized for the latest kernels/IO-systems and hardware, and come up with software that does its job in extremely efficient ways. Apache may have more features, but most of the time all you need is a fast static webserver with FastCGI support anyway, right?

So, what's my choice? At the moment, LighTTPd outperforms anything I've ever tried, I've yet to see it crash, and I haven't missed a single feature so far. Try it, be amazed and change your mind (if it hasn't changed already).

Memcached delivery-cache for Openads

I just submitted a patch for the Openads Project to add a memcached delivery-cache. This patch greatly increases delivery-cache performance compared to using the database, and almost completely eliminates SELECT queries to the database during delivery.

If you're using Openads (or PhpAdsNew 2.0.7 and above) you should be able to easily integrate this patch until it's in the main release(s).

Wednesday, January 24, 2007

File uploads with CakePHP

I wanted to upload a file and attach it to my object, but couldn't find a nice enough solution for it out there, so I created my own method, and I must say I'm quite happy with the results.

In my controller, I just added a method called encodeFile that takes 2 parameters: the PHP file upload array and a target variable reference;

   
function encodeFile($arr, &$dest)
{
    switch($arr["error"]) {
    case 0: // OK!
        $fileData = fread(fopen($arr['tmp_name'], "r"), $arr['size']);
        if(!$fileData)
            return "Could not read file from disk";
        $dest = base64_encode($fileData);
        break;
    case UPLOAD_ERR_INI_SIZE:
    case UPLOAD_ERR_FORM_SIZE:
        return "The file is too big.";
    case UPLOAD_ERR_PARTIAL:
    case UPLOAD_ERR_NO_FILE:
        return "The file was not uploaded.";
    case UPLOAD_ERR_NO_TMP_DIR:
    case UPLOAD_ERR_CANT_WRITE:
        return "A configuration error occurred";
    default:
        return "An unknown error occurred.";
    }

    return true;
}


Then, in my add() method, I can use the function like this:

$ret = $this->encodeFile($this->params["form"]["File"], $this->data["MyModel"]["File"]);
if($ret !== true) {
    $this->Session->setFlash('File problems: ' . $ret);
} else {
    if($this->MyModel->save($this->data)) {
        ...

I also added a controller method to view the image:

function image($id)
{
    $this->layout = "empty";
    $o = $this->MyModel->read(null, $id);
    header("Content-type: image");
    echo base64_decode($o["MyModel"]["File"]);
}

So, now I also have a blog

Just felt the urge to create a blog, so here it is!