Computing is full of buzzwords, “cloud computing” being the latest one. But unlike most trends that fizzle out after the initial surge, cloud computing is here to stay. This article goes over Amazon’s S3 cloud storage service and guides you to implementing a WordPress plugin that backs up your WordPress database to Amazon’s S3 cloud. Note that this is not a tutorial on creating a WordPress plugin from scratch, so some familiarity with plugin development is assumed.
The reason for using Amazon S3 to store important data follows from the “3-2-1” backup rule, coined by Peter Krogh. According to the 3-2-1 rule, you would keep three copies of any critical data: the original data, a backup copy on removable media, and a second backup at an off-site location (in our case, Amazon’s S3 cloud).
Cloud Computing, Concisely
Cloud computing is an umbrella term for any data or software hosted outside of your local system. Cloud computing is categorized into three main service types: infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS).

- Infrastructure as a service: IaaS provides virtual storage, virtual machines and other hardware resources that clients can use on a pay-per-use basis. Amazon S3, Amazon EC2 and RackSpace Cloud are examples of IaaS.
- Platform as a service: PaaS provides virtual machines, application programming interfaces, frameworks and operating systems that clients can deploy for their own applications on the Web. Force.com, Google AppEngine and Windows Azure are examples of PaaS.
- Software as a service: Perhaps the most common type of cloud service is SaaS. Most people use services of this type daily. SaaS provides a complete application operating environment, which the user accesses through a browser rather than a locally installed application. SalesForce.com, Gmail, Google Apps and Basecamp are some examples of SaaS.
Amazon S3 In A Nutshell
Amazon Web Services (AWS) is a bouquet of Web services offered by Amazon that together make up a cloud computing platform. The most essential and best known of these services are Amazon EC2 and Amazon S3. AWS also includes CloudFront, Simple Queue Service, SimpleDB and Elastic Block Store. In this article, we will focus exclusively on Amazon S3.

Amazon S3 is cloud-based data-storage infrastructure that is accessible to the user programmatically via a Web service API (either SOAP or REST). Using the API, the user can store various kinds of data in the S3 cloud. They can store and retrieve data from anywhere on the Web, at any time, using the API. But S3 is nothing like the file system you use on your computer. A lot of people think of S3 as a remote file system, containing a hierarchy of files and directories hosted by Amazon. Nothing could be further from the truth.
Amazon S3 is a flat-namespace storage system, devoid of any hierarchy whatsoever. Each storage container in S3 is called a “bucket,” and each bucket serves the same function as that of a directory in a normal file system. However, there is no hierarchy within a bucket (that is, you cannot create a bucket within a bucket). Each bucket allows you to store various kinds of data, ranging in size from 1 B to a whopping 5 TB (terabytes), although the largest object that can be uploaded in a single PUT request is 5 GB. Obviously, I’ve not experimented with such enormous files.
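The flat namespace is worth internalizing: object keys may contain slashes, and most S3 tools render those as a pseudo-hierarchy, but to S3 they are just flat strings. A minimal sketch of the idea (plain Python with hypothetical key names, no AWS calls):

```python
# S3 has no real directories: a bucket is just a flat mapping of keys to data.
# Slashes in a key are ordinary characters; tools merely group keys by them.
bucket = {
    "images/logo.png": b"...",
    "images/banner.jpg": b"...",
    "backup/db.sql.zip": b"...",
}

# A client can fake "folders" by splitting each key on '/'
folders = sorted({key.split("/")[0] for key in bucket})
print(folders)  # ['backup', 'images']
```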
A file stored in a bucket is referred to as an object. An object is the basic unit of stored data on S3. Objects consist of data and metadata. The metadata is a set of name-value pairs that describe the object. Metadata is optional but often adds immense value, whether it's the default metadata added by S3 (such as the date last modified) or standard HTTP metadata such as Content-Type.

So, what kinds of objects can you store on S3? Any kind you like. It could be a simple text file, a style sheet, programming source code, or a binary file such as an image, video or ZIP file. Each S3 object has its own URL, which you can use to access the object in a browser (if appropriate permissions are set — more on this later).
You can write the URL in two formats, which look something like this:

http://s3.amazonaws.com/codediesel/filename.txt
http://codediesel.s3.amazonaws.com/filename.txt

The bucket's name here is deliberately simple, codediesel. It can be more complex, reflecting the structure of your application, like codediesel.wordpress.backup or codediesel.assets.images.

Every S3 object has a unique URL, formed by concatenating the following components:

- Protocol (http:// or https://);
- S3 endpoint (s3.amazonaws.com);
- Bucket's name;
- Object key, starting with /.
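The concatenation of those components can be sketched in a few lines. This is illustrative only; the bucket and key names are hypothetical:

```python
protocol = "http://"
endpoint = "s3.amazonaws.com"
bucket = "codediesel"
key = "/backup/db.sql.zip"  # object key, starting with '/'

# Path-style URL: endpoint first, then the bucket, then the key
path_style = f"{protocol}{endpoint}/{bucket}{key}"

# Virtual-hosted-style URL: bucket as a subdomain of the endpoint
virtual_hosted = f"{protocol}{bucket}.{endpoint}{key}"

print(path_style)      # http://s3.amazonaws.com/codediesel/backup/db.sql.zip
print(virtual_hosted)  # http://codediesel.s3.amazonaws.com/backup/db.sql.zip
```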
Each bucket name must also be unique across the whole of Amazon S3. For example, if another user has already created a bucket named company-docs, you cannot create a bucket with that name anywhere in the S3 namespace. Object names in a bucket, however, must be unique only to that bucket; so, two different buckets can have objects with the same name. Also, you can describe objects stored in buckets with additional information using metadata.

Bucket names must comply with the following requirements:
- May contain lowercase letters, numbers, periods (.), underscores (_) and hyphens (-);
- Must begin with a number or letter;
- Must be between 3 and 255 characters long;
- May not be formatted as an IP address (e.g. 265.255.5.4).
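The rules above translate directly into a small validity check. This is a hypothetical helper for illustration only; the real service enforces additional constraints, so treat it as a sketch:

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Check a bucket name against the naming rules listed above."""
    if not (3 <= len(name) <= 255):
        return False
    # Only lowercase letters, digits, periods, underscores and hyphens,
    # and the name must begin with a letter or digit
    if not re.fullmatch(r"[a-z0-9][a-z0-9._-]*", name):
        return False
    # Must not look like an IP address (four dot-separated number groups)
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", name):
        return False
    return True

print(is_valid_bucket_name("codediesel.wordpress.backup"))  # True
print(is_valid_bucket_name("265.255.5.4"))                  # False
print(is_valid_bucket_name("-bad-start"))                   # False
```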
Typical uses of Amazon S3 include the following:

- Backup and storage: provide data backup and storage services.
- Host applications: provide services that deploy, install and manage Web applications.
- Host media: build a redundant, scalable and highly available infrastructure that hosts video, photo or music uploads and downloads.
- Deliver software: host your software applications that customers can download.
Amazon S3’s Pricing Model
Amazon S3 is a paid service; you need to attach a credit card to your Amazon account when signing up. But it is surprisingly low priced, and you pay only for what you use; if you use no resources in your S3 account, you pay nothing. Also, as part of the AWS "Free Usage Tier," upon signing up, new AWS customers receive 5 GB of Amazon S3 storage, 20,000 GET requests, 2,000 PUT requests, and 15 GB of data transfer out each month free for one year.

So, how much do you pay after the free period? As a rough estimate, if you stored 5 GB of data per month, with data transfers of 15 GB and 40,000 GET and PUT requests a month, the cost would be around $2.60 per month. That's lower than the cost of a burger — inexpensive by any standard. The prices may change, so use the calculator on the S3 website.

Your S3 usage is charged according to three main parameters:
- The total amount of data stored,
- The total amount of data transferred in and out of S3 per month,
- The number of requests made to S3 per month.
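As a sanity check on the $2.60 estimate above, here is the arithmetic with circa-2011 list prices. The exact rates are assumptions for illustration; always consult Amazon's current price list:

```python
# Assumed circa-2011 US Standard rates (illustrative only)
STORAGE_PER_GB = 0.140        # $ per GB-month stored
TRANSFER_OUT_PER_GB = 0.120   # $ per GB transferred out, after the first free GB
PUT_PER_1000 = 0.01           # $ per 1,000 PUT requests
GET_PER_10000 = 0.01          # $ per 10,000 GET requests

storage_cost = 5 * STORAGE_PER_GB               # 5 GB stored
transfer_cost = (15 - 1) * TRANSFER_OUT_PER_GB  # 15 GB out, first GB free
# 40,000 requests split evenly between PUT and GET
request_cost = (20_000 / 1_000) * PUT_PER_1000 + (20_000 / 10_000) * GET_PER_10000

total = storage_cost + transfer_cost + request_cost
print(f"${total:.2f} per month")  # $2.60 per month
```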
Your data transfer charges are based on the amount of data uploaded and downloaded from S3. Data transferred out of S3 is charged on a sliding scale, starting at $0.12 per gigabyte and decreasing based on volume, reaching $0.050 per gigabyte for all outgoing data transfer in excess of 350 terabytes per month. Note that there is no charge for data transferred within an Amazon S3 “region” via a COPY request, and no charge for data transferred between Amazon EC2 and Amazon S3 within the same region or for data transferred between the Amazon EC2 Northern Virginia region and the Amazon S3 US standard region. To avoid surprises, always check the latest pricing policies on Amazon.
Introduction To The Amazon S3 API And CloudFusion
Now with the theory behind us, let's get to the fun part: writing code. But before that, you will need to register with S3 and create an AWS account. If you don't already have one, you'll be prompted to create one when you sign up for Amazon S3.

Before moving on to the coding part, let's get acquainted with some visual tools that we can use to work with Amazon S3. Various visual and command-line tools are available to help you manage your S3 account and the data in it. Because the visual tools are easy to work with and user-friendly, we will focus on them in this article. I prefer working with the AWS Management Console for security reasons.
AWS Management Console
The Management Console is part of AWS. Because it is tied to your AWS account, no configuration is necessary. Once you've logged in, you have full access to all of your S3 data and other AWS services. You can create new buckets, create objects, apply security policies, copy objects to different buckets, and perform a multitude of other functions.
S3Fox Organizer
The other popular tool is S3Fox Organizer. S3Fox Organizer is a Firefox extension that enables you to upload and download files to and from your Amazon S3 account. The interface, which opens in a Firefox browser tab, looks very much like a regular FTP client with dual panes. It displays files on your PC on the left, files on S3 on the right, and status messages and information in a panel at the bottom.
Onto The Coding
As stated earlier, AWS is Amazon's Web service infrastructure that encompasses various cloud services, including S3, EC2, SimpleDB and CloudFront. Integrating these varied services can be a daunting task. Thankfully, we have at our disposal an SDK library in the form of CloudFusion, which enables us to work with AWS effortlessly. CloudFusion is now the official AWS SDK for PHP, and it encompasses most of Amazon's cloud products: S3, EC2, SimpleDB, CloudFront and many more. For this post, I downloaded the ZIP version of the CloudFusion SDK, but the library is also available as a PEAR package. So, go ahead: download the latest version from the official website, and extract the ZIP to your working directory or to your PHP include path.

In the extracted directory, you will find the config-sample.inc.php file, which you should rename to config.inc.php. You will need to make some changes to the file to reflect your AWS credentials.

In the config file, locate the following lines:

define('AWS_KEY', '');
define('AWS_SECRET_KEY', '');

You can retrieve your access key and secret key from your Amazon AWS account page. Get the keys, and fill them in on the following lines:

define('AWS_KEY', 'your_access_key_id');
define('AWS_SECRET_KEY', 'your_secret_access_key');
With all of the basic requirements in place, let's create our first bucket on Amazon S3, with a name of your choice. The following example shows a bucket by the name of com.smashingmagazine.images. (Of course, by the time you read this, the name may already be taken.) Choose a structure for your bucket's name that is relevant to your work. For each bucket, you can control access to the bucket, view access logs for the bucket and its objects, and set the geographical region where Amazon S3 will store the bucket and its contents.

/* Include the CloudFusion SDK class */
require_once('sdk-1.4.4/sdk.class.php');

/* Our bucket name */
$bucket = 'com.smashingmagazine.images';

/* Initialize the class */
$s3 = new AmazonS3();

/* Create a new bucket */
$resource = $s3->create_bucket($bucket, AmazonS3::REGION_US_E1);

/* Check if the bucket was successfully created */
if ($resource->isOK()) {
    print("'{$bucket}' bucket created\n");
} else {
    print("Error creating bucket '{$bucket}'\n");
}
Let's go over each line in the example above. First, we included the CloudFusion SDK class in our file. You'll need to adjust the path depending on where you've stored the SDK files.

require_once('sdk-1.4.4/sdk.class.php');
Next, we instantiated the Amazon S3 class:

$s3 = new AmazonS3();
In the next step, we created the actual bucket; in this case, com.smashingmagazine.images. Again, your bucket's name must be unique across all existing bucket names in Amazon S3. One way to ensure this is to prefix a word with your company's name or domain, as we've done here. But this does not guarantee that the name will be available. Nothing prevents anyone from creating a bucket named com.microsoft.apps or com.ibm.images, so choose wisely.

$bucket = 'com.smashingmagazine.images';
$resource = $s3->create_bucket($bucket, AmazonS3::REGION_US_E1);
To reiterate, bucket names must comply with the following requirements:

- May contain lowercase letters, numbers, periods (.), underscores (_) and hyphens (-);
- Must start with a number or letter;
- Must be between 3 and 255 characters long;
- May not be formatted as an IP address (e.g. 265.255.5.4).
The second parameter to create_bucket specifies the geographical region for the bucket; in the example, we used the REGION_US_E1 region.

Here are the permitted values for regions:
- AmazonS3::REGION_US_E1
- AmazonS3::REGION_US_W1
- AmazonS3::REGION_EU_W1
- AmazonS3::REGION_APAC_SE1
- AmazonS3::REGION_APAC_NE1
Finally, we checked whether the bucket was successfully created:

if ($resource->isOK()) {
    print("'{$bucket}' bucket created\n");
} else {
    print("Error creating bucket '{$bucket}'\n");
}
Now, let's see how to get a list of the buckets we've created on S3. So, before proceeding, create a few more buckets to your liking. Once you have a few buckets in your account, it is time to list them.

/* Include the CloudFusion SDK class */
require_once('sdk-1.4.4/sdk.class.php');

/* Initialize the class */
$s3 = new AmazonS3();

/* Get a list of buckets */
$buckets = $s3->get_bucket_list();

if ($buckets) {
    foreach ($buckets as $b) {
        echo $b . "\n";
    }
}
The only new part in the code above is the following line, which gets an array of bucket names:

$buckets = $s3->get_bucket_list();
Finally, we printed out all of our buckets' names:

if ($buckets) {
    foreach ($buckets as $b) {
        echo $b . "\n";
    }
}

This concludes our overview of creating and listing buckets in our S3 account.
Uploading Data To Amazon S3
Now that we've learned how to create and list buckets in S3, let's figure out how to put objects into buckets. This is a little complex, and we have a variety of options to choose from. The main method for doing this is create_object. The method takes the following format:

create_object($bucket, $filename, [ $opt = null ])
The first parameter is the name of the bucket in which the object will be stored. The second parameter is the name by which the file will be stored on S3. Using only these two parameters is enough to create an empty object with the given file name. For example, the following code would create an empty object named config-empty.inc in the com.magazine.resources bucket:

$s3 = new AmazonS3();
$bucket = 'com.magazine.resources';
$response = $s3->create_object($bucket, 'config-empty.inc');

// Success?
var_dump($response->isOK());
Once the object is created, we can access it using a URL. The URL for the object above would be:

https://s3.amazonaws.com/com.magazine.resources/config-empty.inc

Of course, if you tried to access the URL from a browser, you would be greeted with an "Access denied" message, because objects stored on S3 are set to private by default, viewable only by the owner. You have to explicitly make an object public (more on that later).

To add some content to the object at the time of creation, we can use the following code. This would add the text "Hello World" to the config-empty.inc file:

$response = $s3->create_object($bucket, 'config-empty.inc',
    array(
        'body' => 'Hello World!'
    ));
As a complete example, the following code would create an object with the name simple.txt, along with some content, and save it in the given bucket. An object may also optionally contain metadata that describes that object.

/* Initialize the class */
$s3 = new AmazonS3();

/* Our bucket name */
$bucket = 'com.magazine.resources';

$response = $s3->create_object($bucket, 'simple.txt',
    array(
        'body' => 'Hello World!'
    ));

if ($response->isOK())
{
    return true;
}
You can also upload a file, rather than just a string, as shown below. Although many options are displayed here, most have a default value and may be omitted. More information on the various options can be found in the "AWS SDK for PHP 1.4.7" documentation.

require_once('sdk-1.4.4/sdk.class.php');

$s3 = new AmazonS3();
$bucket = 'com.smashingmagazine.images';

$response = $s3->create_object($bucket, 'source.php',
    array(
        'fileUpload' => 'test.php',
        'acl' => AmazonS3::ACL_PRIVATE,
        'contentType' => 'text/plain',
        'storage' => AmazonS3::STORAGE_REDUCED,
        'headers' => array( // raw headers
            'Cache-Control' => 'max-age',
            'Content-Encoding' => 'text/plain',
            'Content-Language' => 'en-US',
            'Expires' => 'Thu, 01 Dec 1994 16:00:00 GMT',
        )
    ));

// Success?
var_dump($response->isOK());

Details on the various options will be explained in the coming sections. For now, take it on faith that the above code will correctly upload a file to the S3 server.
Writing Our Amazon S3 WordPress Plugin
With some background on Amazon S3 behind us, it is time to put our learning into practice. We are ready to build a WordPress plugin that will automatically back up our WordPress database to the S3 server and restore it when needed.

To keep the article focused and at a reasonable length, we'll assume that you're familiar with WordPress plugin development. If you are a little sketchy on the fundamentals, a quick read through the WordPress plugin documentation will get you on track quickly.
The Plugin’s Framework
We'll first create a skeleton and then gradually fill in the details. To create a plugin, navigate to the wp-content/plugins folder, and create a new folder named s3-backup. In the new folder, create a file named s3-backup.php. Open the file in the editor of your choice, and paste the following header information, which will describe the plugin for WordPress:

/*
Plugin Name: Amazon S3 Backup
Plugin URI: http://cloud-computing-rocks.com
Description: Plugin to back up WordPress database to Amazon S3
Version: 1.0
Author: Mr. Sameer
Author URI: http://www.codediesel.com
License: GPL2
*/

Once that's done, go to the plugin's page in the admin area, and activate the plugin.
Now that we’ve successfully installed a bare-bones WordPress plugin, let’s add the meat and create a complete working system. Before we start writing the code, we should know what the admin page for the plugin will ultimately look like and what tasks the plugin will perform. This will guide us in writing the code. Here is the main settings page for our plugin:
The interface is fairly simple. The primary task of the plugin will be to back up the current WordPress database to an Amazon S3 bucket and to restore the database from the bucket. The settings page will also have a function for naming the bucket in which the backup will be stored. Also, we can specify whether the backup will be available to the public or accessible only to you.
Below is a complete outline of the plugin’s code. We will elaborate on each section in turn.
/*
Plugin Name: Amazon S3 Backup
Plugin URI: http://cloud-computing-rocks.com
Description: Plugin to back up WordPress database to Amazon S3
Version: 1.0
Author: Mr. Sameer
Author URI: http://www.codediesel.com
License: GPL2
*/

$plugin_path = WP_PLUGIN_DIR . "/" . dirname(plugin_basename(__FILE__));

/* CloudFusion SDK */
require_once($plugin_path . '/sdk-1.4.4/sdk.class.php');

/* WordPress ZIP support library */
require_once(ABSPATH . '/wp-admin/includes/class-pclzip.php');

add_action('admin_menu', 'add_settings_page');

/* Save or Restore Database backup */
if (isset($_POST['aws-s3-backup'])) {
    …
}

/* Generic Message display */
function showMessage($message, $errormsg = false) {
    …
}

/* Back up WordPress database to an Amazon S3 bucket */
function backup_to_AmazonS3() {
    …
}

/* Restore WordPress backup from an Amazon S3 bucket */
function restore_from_AmazonS3() {
    …
}

function add_settings_page() {
    …
}

function draw_settings_page() {
    …
}
Here is the directory structure that our plugin will use:

plugins (WordPress plugin directory)
---s3-backup (our plugin directory)
-------s3backup (restored backup will be stored in this directory)
-------sdk-1.4.4 (CloudFusion SDK directory)
-------s3-backup.php (our plugin source code)
Let's start coding the plugin. First, we'll initialize some variables for paths and include the CloudFusion SDK. A WordPress database can get large, so to conserve space and bandwidth, the plugin will need to compress the database before uploading it to the S3 server. To do this, we will use the class-pclzip.php ZIP compression support library, which is built into WordPress. Finally, we'll hook the settings page to the admin menu.

$plugin_path = WP_PLUGIN_DIR . "/" . dirname(plugin_basename(__FILE__));

/* CloudFusion SDK */
require_once($plugin_path . '/sdk-1.4.4/sdk.class.php');

/* WordPress ZIP support library */
require_once(ABSPATH . '/wp-admin/includes/class-pclzip.php');

/* Create the admin settings page for our plugin */
add_action('admin_menu', 'add_settings_page');
Every WordPress plugin must have its own settings page. Ours is a simple one, with a few buttons and fields. It handles the mundane work of saving the bucket's name, displaying the buttons, and so on. Only the PHP skeleton of the page survives here; the form's markup is sketched as a placeholder:

function draw_settings_page() {
    ?>
    <h2><?php echo('WordPress Database Amazon S3 Backup'); ?></h2>
    <?php
    wp_nonce_field('update-options');
    $access_options = get_option('aws-s3-access-public');
    ?>
    <!-- form fields for the bucket name, the public-access
         checkbox, and the Backup/Restore buttons go here -->
    <?php
}

Setting up the base framework is essential if the plugin is to work correctly. So, double-check your work before proceeding.
Database Upload
Next is the main part of the plugin, its raison d'être: the function for backing up the database to the S3 bucket.

/* Back up WordPress database to an Amazon S3 bucket */
function backup_to_AmazonS3()
{
    global $wpdb, $plugin_path;

    /* Backup file name */
    $backup_zip_file = 'aws-s3-database-backup.zip';

    /* Temporary directory and file name where the backup file will be stored */
    $backup_file = $plugin_path . "/s3backup/aws-s3-database-backup.sql";

    /* Complete path to the compressed backup file */
    $backup_compressed = $plugin_path . "/s3backup/" . $backup_zip_file;

    $tables = $wpdb->get_col("SHOW TABLES LIKE '" . $wpdb->prefix . "%'");
    $result = shell_exec('mysqldump --single-transaction -h ' .
                DB_HOST . ' -u ' . DB_USER . ' --password="' .
                DB_PASSWORD . '" ' .
                DB_NAME . ' ' . implode(' ', $tables) . ' > ' .
                $backup_file);

    $backups[] = $backup_file;

    /* Create a ZIP file of the SQL backup */
    $zip = new PclZip($backup_compressed);
    $zip->create($backups);

    /* Connect to Amazon S3 to upload the ZIP */
    $s3 = new AmazonS3();
    $bucket = get_option('aws-s3-access-bucket');

    /* Check if a bucket name is specified */
    if (empty($bucket)) {
        showMessage("No Bucket specified!", true);
        return;
    }

    /* Set backup public options */
    $access_options = get_option('aws-s3-access-public');

    if ($access_options) {
        $access = AmazonS3::ACL_PUBLIC;
    } else {
        $access = AmazonS3::ACL_PRIVATE;
    }

    /* Upload the database itself */
    $response = $s3->create_object($bucket, $backup_zip_file,
        array(
            'fileUpload' => $backup_compressed,
            'acl' => $access,
            'contentType' => 'application/zip',
            'encryption' => 'AES256',
            'storage' => AmazonS3::STORAGE_REDUCED,
            'headers' => array( // raw headers
                'Cache-Control' => 'max-age',
                'Content-Encoding' => 'application/zip',
                'Content-Language' => 'en-US',
                'Expires' => 'Thu, 01 Dec 1994 16:00:00 GMT',
            )
        ));

    if ($response->isOK()) {
        unlink($backup_compressed);
        unlink($backup_file);
        showMessage("Database successfully backed up to Amazon S3.");
    } else {
        showMessage("Error connecting to Amazon S3", true);
    }
}
There are two common ways to dump a MySQL database: query the data out through PHP, or use the mysqldump command-line utility. We will use the second method. The code for the database dump shown below uses the shell_exec function to run the mysqldump command and grab the WordPress database dump. The dump is then saved to the aws-s3-database-backup.sql file.

$tables = $wpdb->get_col("SHOW TABLES LIKE '" . $wpdb->prefix . "%'");
$result = shell_exec('mysqldump --single-transaction -h ' .
              DB_HOST . ' -u ' . DB_USER .
              ' --password="' . DB_PASSWORD . '" ' .
              DB_NAME . ' ' . implode(' ', $tables) .
              ' > ' . $backup_file);

$backups[] = $backup_file;
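Note that the command above interpolates DB_PASSWORD, DB_NAME, and the table names into the shell command as-is, so credentials containing shell metacharacters would break (or even subvert) it. As a hedged sketch, each argument could be passed through PHP's escapeshellarg first; the helper name build_mysqldump_command is my own and not part of the plugin:

```php
<?php
/* Sketch: build the same mysqldump command with every argument
   shell-escaped. Hypothetical helper, not part of the plugin. */
function build_mysqldump_command($host, $user, $password,
                                 $dbname, array $tables, $outfile)
{
    $cmd = 'mysqldump --single-transaction'
         . ' -h ' . escapeshellarg($host)
         . ' -u ' . escapeshellarg($user)
         . ' --password=' . escapeshellarg($password)
         . ' ' . escapeshellarg($dbname);
    foreach ($tables as $table) {
        $cmd .= ' ' . escapeshellarg($table);
    }
    return $cmd . ' > ' . escapeshellarg($outfile);
}

/* A password containing a single quote no longer breaks the command */
echo build_mysqldump_command('localhost', 'wp', "pa'ss",
    'wordpress', array('wp_posts', 'wp_options'), '/tmp/backup.sql');
```

In the plugin this would simply replace the string concatenation passed to shell_exec; the behaviour is otherwise identical.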
The PclZip class is stored in the /wp-admin/includes/class-pclzip.php file, which we included at the start of the plugin. The aws-s3-database-backup.zip file is the final ZIP file that will be uploaded to the S3 bucket. The following lines create the required ZIP file:

/* Create a ZIP file of the SQL backup */
$zip = new PclZip($backup_compressed);
$zip->create($backups);
The PclZip constructor takes a file name as its input parameter; in this case, aws-s3-database-backup.zip. To the create method we pass an array of the files we want to compress; we have only one file to compress, aws-s3-database-backup.sql.
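PclZip ships with WordPress, which makes it the natural choice here. As a hedged alternative sketch, the SQL dump could instead be gzip-compressed with PHP's zlib extension, which is almost always compiled in (the file paths below are illustrative, not the plugin's):

```php
<?php
/* Sketch: gzip-compress a file with PHP's zlib extension.
   An alternative to PclZip; not what the plugin itself uses. */
function gzip_file($source, $destination)
{
    $data = file_get_contents($source);
    if ($data === false) {
        return false;
    }
    /* Compression level 9 = smallest output */
    return file_put_contents($destination, gzencode($data, 9)) !== false;
}

/* Illustrative usage with a temporary stand-in for the SQL dump */
$sql = tempnam(sys_get_temp_dir(), 'dump');
file_put_contents($sql, str_repeat("INSERT INTO wp_posts VALUES (1);\n", 100));
gzip_file($sql, $sql . '.gz');
```

The trade-off is that .gz holds a single file, whereas a ZIP archive could later bundle several dumps into one object.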
Now that we've taken care of the database, let's move on to security. As mentioned in the introduction, objects stored on S3 can be set as private (viewable only by the owner) or public (viewable by everyone). We set this option using the following code.
/* Set backup public options */
$access_options = get_option('aws-s3-access-public');

if ($access_options) {
    $access = AmazonS3::ACL_PUBLIC;
} else {
    $access = AmazonS3::ACL_PRIVATE;
}
We have used two of the available ACL settings here (AmazonS3::ACL_PUBLIC and AmazonS3::ACL_PRIVATE), but there are a few more, listed below; you can find the details in the Amazon SDK documentation.

AmazonS3::ACL_PRIVATE
AmazonS3::ACL_PUBLIC
AmazonS3::ACL_OPEN
AmazonS3::ACL_AUTH_READ
AmazonS3::ACL_OWNER_READ
AmazonS3::ACL_OWNER_FULL_CONTROL
With the compressed dump ready and the access level set, we use the create_object method of the S3 class to perform the upload. We got a short glimpse of the method in the last section.

/* Upload the database itself */
$response = $s3->create_object($bucket, $backup_zip_file, array(
    'fileUpload' => $backup_compressed,
    'acl' => $access,
    'contentType' => 'application/zip',
    'encryption' => 'AES256',
    'storage' => AmazonS3::STORAGE_REDUCED,
    'headers' => array( // raw headers
        'Cache-Control' => 'max-age',
        'Content-Encoding' => 'application/zip',
        'Content-Language' => 'en-US',
        'Expires' => 'Thu, 01 Dec 1994 16:00:00 GMT',
    )
));
Let's go over the parameters passed to create_object:

$backup_zip_file
The name of the object that will be created on S3.

'fileUpload' => $backup_compressed
The name of the file whose data will be uploaded to the server; in our case, aws-s3-database-backup.zip.

'acl' => $access
The access type for the object; in our case, either public or private.

'contentType' => 'application/zip'
The type of content that is being sent in the body. If a file is being uploaded via fileUpload, as in our case, the SDK will attempt to determine the correct MIME type based on the file's extension. The default value is application/octet-stream.

'encryption' => 'AES256'
The algorithm to use for encrypting the object. (Allowed value: AES256.)

'storage' => AmazonS3::STORAGE_REDUCED
Specifies whether to use "standard" or "reduced redundancy" storage. Allowed values are AmazonS3::STORAGE_STANDARD and AmazonS3::STORAGE_REDUCED; the default is STORAGE_STANDARD.

'headers' => array( // raw headers
    'Cache-Control' => 'max-age',
    'Content-Encoding' => 'application/zip',
    'Content-Language' => 'en-US',
    'Expires' => 'Thu, 01 Dec 1994 16:00:00 GMT',
)
The standard HTTP headers to send along with the request. These are optional.
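After a successful upload, you may want a sanity check that the object stored on S3 matches the local ZIP. For objects uploaded in a single PUT request, S3's ETag is in practice the MD5 hash of the body (this does not hold for multipart uploads), so a comparison like the following sketch can help. The helper name, and the idea of reading the ETag from the SDK response, are my assumptions:

```php
<?php
/* Sketch: compare a local file's MD5 with an S3 ETag value.
   S3 returns the ETag wrapped in double quotes, so strip them.
   Only meaningful for objects uploaded in a single PUT request. */
function etag_matches($local_file, $etag)
{
    return md5_file($local_file) === trim($etag, '"');
}

/* Illustrative check against a locally computed "ETag" */
$file = tempnam(sys_get_temp_dir(), 'etag');
file_put_contents($file, 'backup payload');
var_dump(etag_matches($file, '"' . md5('backup payload') . '"')); // bool(true)
```

In the plugin, the ETag would come from the upload response before the local copies are unlinked.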
Database Restore
Merely being able to back up data is not enough; we also need to be able to restore it when the need arises. In this section, we'll lay out the code for restoring the database from S3. When we say "restore," keep in mind that the database's ZIP file is simply downloaded from S3 to the specified folder in our plugin directory. The actual database on our WordPress server is not changed in any way; you will have to restore the database manually yourself. We could have equipped our plugin to auto-restore as well, but that would have made the code a lot more complex. Here is the complete code for the restore function:
/* Restore the WordPress backup from an Amazon S3 bucket */
function restore_from_AmazonS3()
{
    global $plugin_path;

    /* Backup file name */
    $backup_zip_file = 'aws-s3-database-backup.zip';

    /* Complete path to the compressed backup file */
    $backup_compressed = $plugin_path . "/s3backup/" . $backup_zip_file;

    $s3 = new AmazonS3();
    $bucket = get_option('aws-s3-access-bucket');

    if (empty($bucket)) {
        showMessage("No Bucket specified!", true);
        return;
    }

    $response = $s3->get_object($bucket, $backup_zip_file, array(
        'fileDownload' => $backup_compressed
    ));

    if ($response->isOK()) {
        showMessage("Database successfully restored from Amazon S3.");
    } else {
        showMessage("Error connecting to Amazon S3", true);
    }
}
The download is handled by the get_object method of the SDK, the definition of which is as follows:

get_object ( $bucket, $filename, [ $opt = null ] )
$bucket
The name of the bucket where the backup file is stored. The bucket's name is kept in our WordPress settings variable aws-s3-access-bucket, which we retrieve with the get_option('aws-s3-access-bucket') function.

$backup_zip_file
The file name of the backup object; in our case, aws-s3-database-backup.zip.

'fileDownload' => $backup_compressed
The file-system location to download the file to, or an open file resource. In our case, the s3backup directory in our plugin folder. It must be a server-writable location.
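Since the plugin only downloads the ZIP, the last step of a restore is manual: extract aws-s3-database-backup.sql and feed it back to MySQL. As a hedged sketch, the restore command could be assembled like this; the helper name is mine, and the paths are illustrative:

```php
<?php
/* Sketch: build the shell command that reloads an SQL dump into
   MySQL. Hypothetical helper for the manual restore step;
   arguments are shell-escaped with escapeshellarg. */
function build_mysql_restore_command($host, $user, $password,
                                     $dbname, $sqlfile)
{
    return 'mysql -h ' . escapeshellarg($host)
         . ' -u ' . escapeshellarg($user)
         . ' --password=' . escapeshellarg($password)
         . ' ' . escapeshellarg($dbname)
         . ' < ' . escapeshellarg($sqlfile);
}

echo build_mysql_restore_command('localhost', 'wp', 'secret',
    'wordpress', '/path/to/aws-s3-database-backup.sql');
```

Running the resulting command against a live site overwrites the tables in the dump, so it is worth doing on a staging database first.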
In addition to the above functions, there are some miscellaneous support functions. One is for displaying a message to the user:
/* Generic message display */
function showMessage($message, $errormsg = false) {
    if ($errormsg) {
        echo '<div id="message" class="error">';
    } else {
        echo '<div id="message" class="updated fade">';
    }

    echo "<p><strong>$message</strong></p></div>";
}
Another registers the plugin's settings page in the WordPress admin menu (the third argument is the capability required to see the page; the original numeric user level is deprecated, so we pass 'manage_options' instead):

function add_settings_page() {
    add_options_page('Amazon S3 Backup', 'Amazon S3 Backup', 'manage_options',
        's3-backup', 'draw_settings_page');
}
/* Save or restore the database backup */
if (isset($_POST['aws-s3-backup'])) {
    backup_to_AmazonS3();
} elseif (isset($_POST['aws-s3-restore'])) {
    restore_from_AmazonS3();
}