Amazon S3 PHP Class

This class is a standalone Amazon S3 REST implementation for PHP 5.2.x (using CURL) that supports large file uploads and doesn’t require PEAR.

Download source: (view changelog)

Usage: See the class documentation and example.php in the source distribution.

Known Issues:

  • Files larger than 2GB are not supported on 32 bit systems due to PHP’s signed integer problem
  • SSL is enabled by default and can cause problems with large files. If you don’t need SSL, disable it with S3::$useSSL = false;

More Information:

NOTE ON FOLDERS: Amazon S3 does not support folders.  Clients like S3Fox create specific files that are displayed as folders.  Just use slash paths for your object names (foo/bar.txt) and (foo/) as your prefix when listing contents.
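To illustrate, a “folder view” of a bucket is nothing more than a client-side prefix filter over the flat key list returned by a listing (keysUnderPrefix is a hypothetical helper, not part of the class):

```php
<?php
// Illustrative only: "folders" are just key prefixes. Given a flat list of keys
// (as returned by a bucket listing), a "folder view" is a prefix filter.
function keysUnderPrefix(array $keys, $prefix) {
    $matches = array();
    foreach ($keys as $key) {
        if (strpos($key, $prefix) === 0) $matches[] = $key; // key starts with the prefix
    }
    return $matches;
}
```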

Amazon S3™ is a trademark of Amazon.com, Inc. or its affiliates.


306 Responses to Amazon S3 PHP Class

  1. Jimmy Ho says:

Hi, I really like your S3 PHP class: very clean, abstracted code. Do you have any ideas on streaming? Instead of put, I saw examples on a couple of different sites, like this one,

    Please let me know if you enhance your class to do this

  2. Scott says:

    How would one go about uploading a folder? Apart from putObjectFile into a folder subdirectory? Is there a way to take a whole folder, its files, and subfolders at once?

  3. puzz says:

    Just downloaded and looked at your source, and I like it… The example.php works perfectly. Thank you very much!

  4. Hans says:

    Hey! Awesome script, very useful. I miss a couple of things though:

A listBuckets function, to check which buckets exist.

    A checkObject function that checks if an object with the (file)name exists, without having to get a whole object.

Maybe these things can be done already, but I can’t see it. Thanks!

  5. Todd says:

    thanks for the class, been trying to find one that didn’t use pear. New function suggestion would be a getBuckets() function to get a listing of all the buckets you have created.

  6. Don says:

    @Jimmy: Streaming like you mentioned probably is possible, will have a look at it sometime.

    @Hans: There’s the listBuckets() function you wanted. I’m not sure if a checkObject() could be implemented.

    @Scott: You’d need to iterate through the directory uploading each file.
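A minimal sketch of that directory walk, assuming S3.php is loaded; s3KeyFor() is a hypothetical helper (not part of the class) for turning local paths into slash-style object names:

```php
<?php
// Hypothetical sketch: recursively upload a local directory to a bucket.
// Assumes S3.php is included; s3KeyFor() is illustrative, not part of the class.
function s3KeyFor($baseDir, $filePath) {
    $relative = substr($filePath, strlen(rtrim($baseDir, '/')) + 1);
    return str_replace('\\', '/', $relative); // S3 "folders" are just slashes in key names
}

function uploadDirectory($s3, $baseDir, $bucket) {
    $files = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($baseDir));
    foreach ($files as $file) {
        if (!$file->isFile()) continue; // skip directories and dot entries
        $s3->putObjectFile($file->getPathname(), $bucket, s3KeyFor($baseDir, $file->getPathname()));
    }
}
```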

@Todd: Yeah, the PEAR requirement is what made me write this class. listBuckets() is there for you too!

    Hope you all find it useful. If you use it for something cool, drop me a line and let me know!

  7. Todd says:

    RE: checkObject()
There is an operation in the REST API called HEAD that will fetch the metadata of a file (under REST API > Operations on Objects > HEAD Object). That may give you what you need, as it will return a file-not-found error if the name doesn’t exist.

    And Don, thanks for the update

  8. Hans says:

    @Don: The timeouts occur with putObjectFile(). Have you tried the script on a shared hosting space? It might be something there..

Is there some way I could debug this?

    Thanks for your support!

  9. Great work… lean and mean. I echo what everyone else said about the PEAR bloat.

    But… you didn’t mention that it needed Simple XML. Worse, you used an @ in the call: @simplexml_load_string($data[1]) : $data[1];

(Backslashes are because your web site wants to make superscripts from the array indexes.)

    Result was absolute silence.


    But, it’s such a great piece of work that I forgive you.


  10. Raam Dev says:

    Great work! I use this script in some of my personal projects and it works great.

    One of my projects required the use of PHP4 without any PEAR packages and figuring out how to create HMAC signatures was difficult. In case anyone else needs to do this, I documented my results here:
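For reference, the RFC 2104 construction Raam describes can be sketched in a few lines of PHP 4-compatible code; on PHP 5 it should agree with hash_hmac(). S3’s signature is then the base64 of this raw digest over the string to sign. (hmacSha1 is an illustrative name, not from the class.)

```php
<?php
// A PHP4-compatible HMAC-SHA1 (RFC 2104) sketch, matching hash_hmac('sha1', ...).
function hmacSha1($key, $data) {
    $blockSize = 64;
    if (strlen($key) > $blockSize) $key = pack('H*', sha1($key)); // hash long keys first
    $key  = str_pad($key, $blockSize, chr(0x00));                 // pad key to block size
    $ipad = $key ^ str_repeat(chr(0x36), $blockSize);
    $opad = $key ^ str_repeat(chr(0x5C), $blockSize);
    // H(K^opad . H(K^ipad . data)), returned as raw bytes
    return pack('H*', sha1($opad . pack('H*', sha1($ipad . $data))));
}
```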

  11. Edu says:

    Which is the best method for uploading a file from a form?

    Put an object from a resource? from a file? from a string?


    The file is uploaded first to my server using multipart form data.

  12. Ren says:

    Great class. Love the putObjectFile method. I used to create an object, check for type and upload the object… this method does it all. Very well written. Thank you!!

  13. SJ says:

    Hi, I must thank you for this because Amazon’s documentation is useless, I was up and running with this in no time. You have been a huge help!

    I’m having a problem though which I was wondering if you had experienced, the script hangs when uploading very small objects, such as a 1×1 spacer gif, or a browser-dependent stylesheet with just a line or two. The file does appear in the bucket, but then it dies.

    I added a CURL timeout which seemed sensible anyway but to no avail.

  14. malone says:

    Great class, it’s really saved me a lot of time.

But I was getting [SignatureDoesNotMatch] errors when I sent certain metadata. The docs say you have to “Sort the collection of headers lexicographically by header name” before signing them. So I added sort($amz); before line 791 of S3.php, and that fixed the problem.
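A quick illustration of the fix, assuming $amz holds header:value strings in the order a caller added them (the header values below are made up):

```php
<?php
// Illustrative x-amz headers (values made up), in the unsorted order a caller might add them:
$amz = array(
    'x-amz-meta-username:don',
    'x-amz-acl:public-read',
    'x-amz-meta-checksum:abc123',
);

// malone's fix: sort lexicographically before the headers go into the string to sign
sort($amz);
```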

  15. Heron says:

    Works beautifully, many thanks 🙂

    How hard would it be to add in a parameter to putBucket to specify the location of the bucket? Right now the only possibility is EU (for Europe), with US being the default if you don’t specify (or does ‘US’ work as a location?), but it doesn’t hurt to plan ahead.

    I’ve only taken a cursory glance at the guts of S3 and S3Request so I’m not sure where you’d have to integrate this. If you were using PEAR/HTTP_Request, you’d use the addPutData method of HTTP_Request to add the appropriate snippet of XML to the put bucket request.

    I’ll look into it more tomorrow and see what I can figure out, but I thought I’d see if you know offhand.

  16. Heron says:

    I’ve figured out how to specify Europe as the new bucket location, so I’ll tell you what I’ve added to make it work.

    First I added an array, $extradata, as another private member variable of S3Request. Then I added a method addExtraData($data) similar in spirit to setHeader.

    Next I added the following to the else statement in the ‘PUT’ case of the ‘request types’ switch in getResponse(), just after the CURLOPT_CUSTOMREQUEST line: [Sorry, the formatting doesn’t work well here.]

$strdata = '';
foreach ($extradata as $data)
    $strdata .= $data;

if (strlen($strdata) > 0) {
    $temp = tmpfile();
    fwrite($temp, $strdata);
    curl_setopt($curl, CURLOPT_INFILE, $temp);
    curl_setopt($curl, CURLOPT_INFILESIZE, strlen($strdata));
}

    Finally, I added a third parameter to S3::putBucket, $europe, which defaults to false. Just before calling $rest->getResponse(), I do:

$data  = "<CreateBucketConfiguration>\n";
$data .= "  <LocationConstraint>EU</LocationConstraint>\n";
$data .= '</CreateBucketConfiguration>';

    Next I plan on seeing whether I can make listBuckets’ detailed return indicate whether a bucket is in Europe or not. I’ll let you know if/when I do.

    Hope this helps.

  17. Heron says:

    You are correct, there is unfortunately no way to get this information in listBuckets.

    The changelog says the class supports the ?location and ?torrent parameters. How do I use them?

    Specifically I’m trying to figure out the location of a bucket. The request looks like this, according to Amazon:

    GET /?location HTTP/1.1
    Date: Tue, 09 Oct 2007 20:26:04 +0000
    Authorization: AWS 1ATXQ3HHA59CYF1CVS02:JUtd9kkJFjbKbkP9f6T/tAxozYY=

    How do I get this set up with your s3 class? The getObjectInfo function doesn’t work, as it sends a HEAD request rather than a GET request as specified by Amazon. I think I’ll write another function to get this information until you have a moment to show me how to do it.

  18. Don says:

    @Heron: I’ve added a getBucketLocation() method and a 3rd location parameter to putBucket().

    But you’re right the getObjectInfo() HEAD request wouldn’t work for ?location.

  19. Heron says:

    Awesome, that’s perfect.

    Maybe next (if you’re open to suggestions or requests) you could add copy and rename methods for objects. Amazon lets you issue a COPY request to copy a file from one bucket+filename to another bucket+filename; that would make it easy for me to maintain European mirrors of my US data without having to upload the data twice.

    You’d have to implement rename as a COPY (to the new file name, same bucket) and a DELETE (old file name).
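Once a COPY call exists, that rename could be sketched as below. renameObject() is a hypothetical helper, and the copyObject()/deleteObject() signatures are assumptions based on this thread:

```php
<?php
// Hypothetical rename helper built on the COPY + DELETE idea above.
// Not part of the class; copyObject()/deleteObject() signatures are assumed.
function renameObject($s3, $bucket, $oldUri, $newUri) {
    if (!$s3->copyObject($bucket, $oldUri, $bucket, $newUri)) {
        return false; // copy failed: leave the original object in place
    }
    return $s3->deleteObject($bucket, $oldUri);
}
```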

  20. Don says:

    @Heron: Added copyObject() 🙂 Didn’t even know that was possible, thanks for the heads up!

  21. Heron says:

    Awesome 🙂 I’ll try it out at work tomorrow.

  22. Heron says:

    I used copyObject to implement a ‘rename’ function in my little interface, since I don’t actually need to be able to copy objects right now.

    Works like a charm 🙂

  23. Ronny says:

    Nice work. Works very well. You made my day! ;]

  24. Heron says:


    I need to be able to modify the access permissions of an existing bucket. Something like $s3->modifyBucketPermissions($bucket, $permission) where $permission is public, public-read, or private. It might be useful to have a function to obtain the current permission settings for a given bucket as well. It’ll be far faster to wait for you to add this than to move the data elsewhere, recreate the bucket, and move the data back…

    Here’s the documentation link for it:

  25. Don says:

    @Heron: You can do this:

    $acp = $s3->getAccessControlPolicy($bucketName);

    // Here you would modify $acp

    // Then update it
$s3->setAccessControlPolicy($bucketName, '', $acp);

  26. Heron says:


    Are you uploading the file to your web server and then uploading it from there to S3, or are you uploading the file directly from your computer to S3?


    $contents = $s3->getBucket($bucketName);

    That will put the contents of $bucketName in $contents. You can then loop through $contents yourself and filter out the ones you don’t want. Read through example.php if you need to see how to use getBucket().

  27. Heron says:

    @Ben: You might try the getObjectInfo() function. Pass it the bucket and uri; if it returns false, the object doesn’t exist.

if ($s3->getObjectInfo($bucket, $objectname, false))
    { echo 'anchor tag here'; }
else
    { echo 'no such file'; }

    The third parameter is important; you’ll get a bunch of object info instead of true/false if you don’t pass false for the third parameter.

  28. Heron says:

    @johan: I ran into that same problem. You need to urlencode the object name in the delete request. For example:

    $s3->deleteObject($bucketname, urlencode($objectname));

  29. Chuck says:

Same problem as Cameron #82. I used “S3 Firefox Organizer” to upload a bunch of test files to S3. I can use $s3->getBucket in a test script to obtain a list of the files and related data, all OK. But use of $s3->getObjectInfo on the same bucket and a specific object fails with a “505 Unexpected HTTP status” error. The line of S3.php named in the warning is inside that method’s code.

    Thank you for this excellent module and I appreciate any help.


  30. Heron says:

    @Chuck: What is the name of the object you’re trying to get info on? Does it have a space or other symbols in the name? If so you might try using urlencode() on the object name before passing it to getObjectInfo().

  31. Chuck says:

    Ah, you are the man! I used your idea to urlencode the filename and my test code now works.

    Thank you very much for the prompt and effective response.


  32. I’ve found the need to add some code to add Cache-Control header for my puts to S3. I think others would find this functionality useful as well. For my usage, I set the Cache-control so static files are not repeatedly downloaded from S3. Please let me know if/how you would like my code addition (e.g. diff, zip of s3.php or just the function).

    Thanks for a great library!

  33. Don says:

    @Angelo: I’ve added some changes for request headers. Thanks for providing the info.

  34. Christo says:

    I added the delimiter field in for getBucket():
public static function getBucket($bucket, $prefix = null, $marker = null, $maxKeys = null, $delimiter = '/')

$rest = new S3Request('GET', $bucket, '');
if ($prefix !== null && $prefix !== '') $rest->setParameter('prefix', $prefix);
if ($marker !== null && $marker !== '') $rest->setParameter('marker', $marker);
if ($maxKeys !== null && $maxKeys !== '') $rest->setParameter('max-keys', $maxKeys);
if ($delimiter !== null && $delimiter !== '') $rest->setParameter('delimiter', $delimiter);


  35. Jed Wood says:

    Just checked out the latest version (have never used this class before) and I’m getting an error of curl_init not defined. Any ideas?

  36. Don says:

    @Jed: You need the CURL extension.

  37. Memphis says:

Looks like it does not work with files that have a space in the name.
    It says:
    Warning: S3::copyObject(mybucket1, blz21 – 22_t.jpg, mybucket2, blz21 – 22_t.jpg): [505] Unexpected HTTP status in /home/username/S3.php on line 446

Is it a bug, or do I have to encode the file name?

  38. Heron says:

    If you’d read previous posts, Memphis, you’d know the solution: You need to urlencode the object name before sending it.

  39. Memphis says:

Thanks, it works well. Do you know how to use getBucket() to list a subset of the objects in a bucket? For example, I have 10,000+ objects in my bucket and I want to list the first 1000 objects, then the next 1000, and so on, in alphabetical order. The $maxKeys param does not work for me in that case.

  40. Heron says:

    Memphis: If I remember right, it gives you the files from A-Z, 0-9, then a-z in that order. That’s not exactly alphabetical; it’s ordered by ASCII code. If you want it to be in case-insensitive alphabetic order, you’ll need to grab all the keys yourself, sort them, then cache them locally before displaying them to the user.
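To see the difference, compare PHP’s default byte-order sort with a case-insensitive one (the key names here are made up):

```php
<?php
// S3 returns keys in byte (ASCII) order, so uppercase sorts before lowercase.
$keys = array('apple.txt', 'Banana.txt', 'cherry.txt');

$byteOrder = $keys;
sort($byteOrder);                       // S3-style ASCII order

$caseInsensitive = $keys;
usort($caseInsensitive, 'strcasecmp');  // local case-insensitive re-sort, as suggested above
```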

  41. Mike says:

    Is there any chance to add curl_multi functionality so that we can send requests in parallel?

  42. Don says:

@Mike: You could, but unfortunately that functionality will not be added anytime soon.

  43. Randy says:

The getObject method seems to be capable of writing an object to a resource, but how can I set up a process in which server-side PHP requests an S3 object with authentication, and the client user can download the object from the browser?

  44. Heron says:

    Randy: I assume you’re asking because you’ve made your files private. You can generate a temporary download link using this method:

    Scroll down to the part labelled “Query String Request Authentication Alternative” near the bottom. Using this method, you can give your users a controlled way to download private objects from your buckets.

    This thread goes in to more detail:

  45. profes says:

    Hi, only small update to your S3.php class (in function inputFile), you can use
    [ — cut — ]
    if ( !is_readable($file) || !is_file($file) ) {
    [ — cut — ]

    instead of

    [ — cut — ]
    if (!file_exists($file) || !is_file($file) || !is_readable($file)) {
    [ — cut — ]
is_readable() will also check that the file really exists.

  46. Heron says:


    I’m having trouble getting bucket logging to work. I have a private bucket “” and another private bucket “company-access-logs”. It is my understanding that this:

$s3->setBucketLogging('', 'company-access-logs', 'foobar-');

    should enable logging for bucket into bucket company-access-logs. However, I get the following error:

    Warning: S3::setBucketLogging(, ): [InvalidTargetBucketForLogging] You must give the log-delivery group WRITE and READ_ACP permissions to the target bucket in /var/www/localhost/htdocs/s3/S3.php on line 491

    Any idea what I’m doing wrong?

  47. Heron says:

    Solved via e-mail. Thanks, Don 🙂

  48. Chris Savery says:

    Hi Don,
    I modified your copyObject slightly to make a new setObjectInfo that allows for replacing meta data on objects without copying using x-amz-metadata-directive. This was very useful for me to fix Cache-Control headers on lots of images already uploaded. I’ve used it and it appears to work but one thing I noticed is that a new Content-Type needs to be supplied or else browsers may not respond as expected. I also added an acl parameter so that could be changed too. I added the code here but it doesn’t show well at all. Please email me and I’ll give it to you as it may be handy for others too. It was also nice for removing junk headers added by s3fox – didn’t like that stuff showing up anyway.

  49. Giraldo says:

    A little help on creating folders.

    Just add the folder name to the file name (not the temp file name).

    So it would be like:
$s3->putObjectFile($fileTemp, $bucketName, 'images/'.$fileName, S3::ACL_PUBLIC_READ, array(), 'image/jpg');

    Remember to add the contentType at the end if you want to be able to view your files via the browser.


  50. mahendra says:

I have some confusion with the listBuckets() function, because it shows the list of buckets I have created, but I need to validate a new bucket name by matching it against all existing bucket names. How can I do that?
Is there any reference or function for this?

Thanks in advance

  51. Heron says:

    Are you asking how to tell if a given bucket name is taken globally? You can’t, except by trying to register it. The best you can do is test it against the list of buckets you have registered yourself – that is, the ones returned by listBuckets.

  52. mahendra says:

Hi Heron,
Thanks for replying.
Actually, when I create a bucket that already exists using the class’s putBucket($bucketname) function, it just overwrites that bucket and updates the creation time; no error or error code is returned saying the bucket already exists.

That’s why I am confused about why it does this. Either I am doing something wrong, or please suggest what I should do…

  53. Norio says:

    Hey bro 🙂

    Blog is coming along nicely. Loving the tutorials! About to code something up for my peeps using the S3 class. Thanks!

  54. mmoney says:

Your S3 PHP Class rocks!! Love that it does not require anything more than PHP 5.2, and I got my work done in total in about 5 hours today. That time does include tweaks for human-readable output. THANKS!

  55. Heron says:


    I’m not sure what PHP’s behavior is; however paying for bandwidth twice (uploading to your server, then uploading to Amazon) seems superfluous. To help with this, Amazon provides a method to POST a file directly to your bucket, given your authorization, and then redirect to a page of your choice. You can find more information here:

    Enjoy 🙂

  56. jasonmog says:

There’s a bug where sprintf() blows up if there’s a % in a variable value interpolated into the format string. Use %s placeholders everywhere and move the variables into the argument list.
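A minimal illustration of the hazard jasonmog describes (the file name below is made up):

```php
<?php
// A % in interpolated data is parsed as a conversion specifier, so building the
// format string from variables is unsafe: sprintf("S3::getObject({$uri}): error")
// breaks when $uri contains '%'. Passing the value through %s is safe.
$uri = '50%-off.jpg';
$message = sprintf('S3::getObject(%s): error', $uri);
```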

  57. Brian says:

    I have a lot of data to load into S3 so I did some benchmarking. I PUT 1000 small objects into a bucket with both classes:

    CURL version took 7.5 minutes.

    PEAR version took 4.5 minutes.

    Of the 7.5 minutes, 6.5 are spent inside putObject’s $rest->getResponse() call.

    I hope this will encourage you to try and optimize your class. I still plan to use it in production either way though so thanks for making it 🙂

  58. Chris Snyder says:

    @Malone, @Don Regarding empty files, there is a bug in the S3.php code.

There are two conditional statements in S3::putObject that include $input['size'] > 0. If you replace those with $input['size'] > -1 it works fine.

    I’m not sure why the check is there in the first place, but anyway this fix will still prevent negative sizes.


  59. Chris Snyder says:

Found another bug: CURL is not verifying the SSL certificate coming from Amazon. If we care enough to use SSL, we should definitely be checking the certificate. 😉

Anyway, issues have been added for this and for comment #127.

    Thanks for such a useful class!

  60. Don says:

    @Chris: Regarding SSL verification: it is not a bug – CURL will spit out warnings otherwise.

  61. Don says:

    @Chris: The 0 byte check exists because I never expected people to upload empty files 🙂 Will change in the next update anyway.

  62. Don says:

    @Brian: Sorry to say it but this isn’t a meaningful benchmark. Firstly, I cannot see how PHP code could be faster than C code. Secondly, the PEAR version buffers files in memory and using CURL we are actually able to have optimal unbuffered read/writes.

  63. David says:

    Don – Brilliant class, implemented in no time at all!

    Are you planning to add support for folders within buckets?

  64. Kevin says:

Hi Don, first of all, great work. I’m using your class to back up my database to S3. Quick question: what are the advantages or disadvantages of your class versus a PEAR-based class? The other class on offer uses two PEAR extensions (HTTP_Request and HMAC). Is there a speed difference? Which class is better at buffering and streaming large files? Thanks for your help. Keep up the good work.

  65. Don says:

    @Kevin: There are a number of advantages – but the fact that there are no PEAR dependencies is usually enough for most. And if you are copying large files, this is the one you want to use.

  66. Don says:

@David: There is a way to emulate folders by using "_$folder$" suffixes, but you would have to look into it… If you create a folder with S3Fox and retrieve a list using this class you will see how it is done.

  67. Kevin says:

    Hi Don, one more question. Are we still limited to 2GB files max? Thanks.

  68. Don says:

@Kevin: Yes; see the known issues above for more information.

  69. Kevin says:

    Don, when we add a delimiter to getBucket(), do you roll up the prefix? I notice the results still have all the prefixes attached to the front of the key names. Thanks.

  70. Sven says:

    Hmm, this is not working for me. Tried it in various hosting environments and I’m still experiencing the issues outlined in comment #14:

    “When I try to upload large files it will just stop after a couple minutes. It gives no error.”

    Any ideas?

  71. Bryan says:

    @Sven I’m having the same problem but only on large files. I tried setting CURLOPT_TIMEOUT to a high number to no avail.

  72. Heron says:

    @Sven and Bryan: If you’re using this class to upload from a user’s computer to an S3 bucket, you may want to consider POSTing directly to the S3 bucket, as explained in this article:

    If, however, you’re using it to upload files from your server to S3, you’ll need to figure out what your Apache settings are and tweak them so Apache doesn’t time out.

  73. David says:

@141: The following PHP-specific tutorial is good, and uses AJAX to upload files

  74. Sam says:

I have a question similar to post #103. I have a file that is private and needs to be downloaded automatically. How is this done with getObject? Or is the best way to use the “Query String Request Authentication Alternative”? Thanks.

  75. Heron says:

    Sam: You need the user’s browser to be able to download it? You’ll want to generate a temporary URL giving permission to download the private file, as in this article:

  76. marek says:

I’ve got the following problem: sometimes (10% probability?) when uploading to S3, my PHP (CLI, 5.2.0-8+etch11, libcurl/7.15.5 OpenSSL/0.9.8c zlib/1.2.3 libidn/0.6.5) hangs forever, eating 100% CPU. strace shows only these two lines again and again…

    poll([{fd=4, events=POLLOUT, revents=POLLHUP}], 1, 0) = 1
    poll([{fd=4, events=POLLOUT, revents=POLLHUP}], 1, 1000) = 1

It happens in putObjectFile(), uploading an approximately 750MB file.

    Any idea what the problem might be?

  77. Sam says:

    @Heron: Thanks for your help.

  78. pixelterra says:

    Your class didn’t allow for authenticating via query string:

function getAuthenticatedUrl($bucket, $resource, $expires_in) {
    $expires = time() + $expires_in;
    $string_to_sign = "GET\n\n\n{$expires}\n/{$bucket}/{$resource}";
    $signature = urlencode(base64_encode(hash_hmac('sha1', utf8_encode($string_to_sign), self::$__secretKey, true)));

    $authentication_params = 'AWSAccessKeyId=' . self::$__accessKey;
    $authentication_params .= "&Expires=$expires";
    $authentication_params .= "&Signature=$signature";

    $link = "http://{$bucket}.s3.amazonaws.com/{$resource}?{$authentication_params}";

    return $link;
    // url encode this?
    // echo '<a href="' . htmlentities($link) . '">Authenticated Link</a>';
}


  79. Pushpesh says:

    I am using copyObject() function for copying various images within a bucket. It creates a copy of the image, but when i try to view the image, it says “Access Denied”…Could you figure out the reason.
    Many thanks.

  80. Pushpesh says:

In reply to my earlier query (no. 148), I have found a workaround for the stated problem. All I needed to do was to add the following:
$rest->setAmzHeader('x-amz-acl', self::ACL_PUBLIC_READ);
after the line
$rest->setAmzHeader('x-amz-copy-source', sprintf('/%s/%s', $srcBucket, $srcUri));
in the copyObject(…) function.

  81. Don says:

    @pixelterra: Added getAuthenticatedURL()

    @Pushpesh: Added an ACL parameter to copyObject()

  82. marc breuer says:

    great stuff, worked fine for me! have only done some simple uploading and deleting of objects, but I don’t really need more than that.

  83. Mike says:

    I’m not sure what causes this, but when I tried the script below:
if (!defined('awsAccessKey')) define('awsAccessKey', 'MyAccessKey');
if (!defined('awsSecretKey')) define('awsSecretKey', 'MySecretKey');

$s3 = new S3(awsAccessKey, awsSecretKey);
$a = $s3->listBuckets();
echo "S3::listBuckets(): ".$a[ 0 ]."\n";

define('BUCKET', $a[ 0 ]);
$RESPONSE = $s3->getBucket(BUCKET);

    it outputs as:
    S3::listBuckets(): [My_Correct_Bucket_Name]
    Warning: S3::getBucket(): [NoSuchBucket] The specified bucket does not exist in …./s3/S3.php on line 134

    Something with my script, or with my S3 account? Thanks

    *I have to add spaces in $a[ 0 ] to avoid autolink

  84. Heron says:

Mike: You’ll want to make sure there are no extraneous newlines or anything in $a[ 0 ]. Also, does it work if you do $s3->getBucket('putbucketnamehere'); manually?

  85. Joao Pinto says:

    Dear Don Schonknecht,

    Congratulations for S3.php file. Great job!
However, this class doesn’t allow listing objects based on queries, because there is an error in your S3Request class, which I have already fixed.
Basically, you weren’t stripping the URI’s query string from the “StringToSign” (the “resource” variable) in the S3Request class.
If you check the AWS S3 tutorial, it says that when the URI has a query string, the StringToSign should only contain the bucket and the Request-URI. This means you should only append, to the StringToSign, the path part of the un-decoded HTTP Request-URI, up to but not including the query string.
For more information see the 3rd example at the URL.

    Like I told you before, I already fixed this bug.
    Basically I added/changed the following lines on the S3Request constructor function:

$resource_uri = strpos($this->uri, '?') > 0 ? substr($this->uri, 0, strpos($this->uri, '?')) : $this->uri;
if ($this->bucket !== '') {
    $this->resource = '/'.$this->bucket.$resource_uri;
    $this->headers['Host'] = $this->bucket.'.s3.amazonaws.com';
} else {
    $this->headers['Host'] = 's3.amazonaws.com';
    $this->resource = strlen($this->uri) > 1 ? '/'.$this->bucket.$resource_uri : $resource_uri;
}

    This means the S3Request constructor has now the following code:

function __construct($verb, $bucket = '', $uri = '') {
    $this->verb = $verb;
    $this->bucket = strtolower($bucket);
    $this->uri = $uri !== '' ? '/'.$uri : '/';

    $resource_uri = strpos($this->uri, '?') > 0 ? substr($this->uri, 0, strpos($this->uri, '?')) : $this->uri;
    if ($this->bucket !== '') {
        $this->resource = '/'.$this->bucket.$resource_uri;
        $this->headers['Host'] = $this->bucket.'.s3.amazonaws.com';
    } else {
        $this->headers['Host'] = 's3.amazonaws.com';
        $this->resource = strlen($this->uri) > 1 ? '/'.$this->bucket.$resource_uri : $resource_uri;
    }
    $this->headers['Date'] = gmdate('D, d M Y H:i:s T');

    $this->response = new STDClass;
    $this->response->error = false;
}
If you have any other questions, don’t hesitate to email me.

    Joao Pinto

  86. Joao Pinto says:

    Dear S3 Owners,

I’m changing the S3 class functions to return a list of objects and a list of object info if the URI contains a query string.
I will send you the new code by email or post it here when it is ready.
Can you send me your email address, please?

    Joao Pinto

  87. Kevin says:

    Any chance you might support Cloudfront in the future?

  88. Setting metadata headers isn’t quite working… I’m using…


    …and then including $metaHeaders in the putObject instruction.

    I’m getting back:

    x-amz-meta-content-type: image/png
    Content-Type: application/octet-stream

    What am I doing wrong?

  89. Don says:

    @Kevin: Written some code for CloudFront – it will most likely appear in the next release.

@James Cridland: The $metaHeaders are the x-amz-meta-* headers. What you need to do is use $requestHeaders['Content-Type'] = 'image/png'
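To make the distinction concrete, a hypothetical upload might split its headers like this (the names and values are illustrative, not from the class):

```php
<?php
// Entries in $metaHeaders become x-amz-meta-* headers, while standard HTTP
// headers such as Content-Type or Cache-Control belong in $requestHeaders.
$metaHeaders = array('uploaded-by' => 'cron');  // sent as x-amz-meta-uploaded-by
$requestHeaders = array(
    'Content-Type'  => 'image/png',
    'Cache-Control' => 'max-age=315360000',
);
// e.g. $s3->putObject($input, $bucket, $uri, S3::ACL_PUBLIC_READ, $metaHeaders, $requestHeaders);
```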

  90. Closer, but still no dice. Now using putObjectFile – Content-Type works fine (as a part of the call), but


    … doesn’t really do the job, returning

    x-amz-meta-cache-control: max-age=315360000

    I could really do with setting the cache-control properly…

OK, I’ve worked out a workaround.

$transfer->putObject($transfer->inputResource(fopen($file, 'rb'), filesize($file)), $bucketname, $object_name, S3::ACL_PUBLIC_READ, array(), $requestHeaders)

    That works, though not using putObjectFile. But it works. So. Good. 😉

    (That sets a ten-year cache on that image, btw.)

  92. Chris Snyder says:

    Hey, CloudFront support already, nice! Didn’t there used to be a PayPal link around here somewhere?
    Well anyway, thanks for keeping up with the latest and continuing to improve the class.

  93. arthur says:

Question about licensing: I’ve created a Drupal module that uses S3 as a storage option, and it currently uses a different S3 library which does not use CURL. I’m interested in using your library, but to include it in Drupal’s CVS repository, it would have to be under the GPL. Would you consider releasing it under the GPL as well?

    Thanks so much

  94. Heron says:

    Arthur, this S3 class is released under something like the BSD license (which is more permissive than the GPL). I’m not a lawyer but I don’t think there’s a problem with using the S3 class in conjunction with GPL’ed code, as long as you retain the license information given in the S3.php file and maintain a clear separation between your code and the S3 class.

  95. arthur says:

Heron: unfortunately, the rules for the Drupal CVS repository are strict as far as I know: GPL, or include a link to a location where the user can download the file. My preference is obviously to include the class in the repository, which is why I’m asking. I’m not a lawyer either, but I do know that every time the MIT/BSD/GPL thing comes up people seem to lose their minds, which is why I’m trying to ask nicely 🙂

  96. Kevin says:

    Have any of you tried changing the access control policy of objects in S3? I’m trying to add AllUsers READ access but it doesn’t seem to be saving the changes properly.

    First I use getAccessControlPolicy on the object to get $acp, then I add:

$acp['acl'][] = array('type' => 'Group', 'uri' => 'http://acs.amazonaws.com/groups/global/AllUsers', 'permission' => 'READ');

    Then I set it back using setAccessControlPolicy but it doesn’t seem to be storing the change.


  97. Kevin says:

    Oops I apologize. I figured out the problem. Sorry, I wish I could delete the last comment. Thanks again for your work Don!

  98. Mat says:

    Thanks a lot for this class! It’s clean and very helpful 🙂

  99. david says:

    Nice class. Just wanted to let everyone know about a problem I ran into using this. I was getting [NotSignedUp] Your account is not signed up for the S3 service. You must sign up before you can use S3.
    I am running on a Windows machine and it is apparently an issue with the SSL cert on Amazon and my windows box. I’ve turned off SSL for now and will go back and investigate when I get everything else working. Just didn’t want anyone else to bang their head against a wall on this one.

  100. Sune Kibsgaard Pedersen says:

    Great class, very well written.

    It would be nice if it was possible to get the CommonPrefixes out when doing getBucket() with a delimiter

  101. Joao Pinto says:

    Hey guys,

    I created a new Google project with a file manager for the Amazon S3 service.
    Please visit the following page:

    I’m searching for coders to help me improve this Amazon file manager project.

    Joao Pinto

  102. sam says:

    Maybe I am misusing the function, but for putObject(), if you pass an empty value or false for $metaHeaders there is no error checking, so on line # 349 you get an error when it tries to run a foreach on $metaHeaders.

    Is this by design, am I supposed to pass a different (empty) value when I don’t have any metaHeaders but do have requestHeaders?

    $s3->putObjectFile($filePath, $bucketName, $fileName, S3::ACL_PUBLIC_READ, '', $requestHeadersArray);

    Using version 0.3.9

    BTW, thank you very much for creating this class, it’s been very useful!

  103. Heron says:


    If you pass array() as that parameter, you should be fine. For example:

    $s3->putObjectFile($filePath, $bucketName, $fileName, S3::ACL_PUBLIC_READ, array(), $requestHeadersArray);

    That way foreach will accept the parameter as valid but will do nothing since the array is empty.

  104. Lint Filter says:

    Wow S3 isn’t too frustrating… I appreciate you making this class. I’m very surprised Amazon didn’t make this already though.

  105. Heron says:

    Lint Filter,

    Amazon made the REST API (and the SOAP API), and they also provided examples of those APIs being used in various languages. Trouble is, their example implementations are fairly basic and not particularly general-purpose. That’s where Don’s S3 class comes in 🙂

  106. Joao Pinto says:

    Hey guys,

    I found a bug in the S3 class, in the putObjectString function.
    Basically, if you try to create an object through the putObjectString function with a numeric input, the S3 class returns a PHP error.
    The putObjectString function should handle these kinds of inputs.
    I propose the following:

    public static function putObjectString($string, $bucket, $uri, $acl = self::ACL_PRIVATE, $metaHeaders = array(), $contentType = 'text/plain') {
        if (!is_string($string)) {
            if (is_object($string) && in_array("__toString", get_class_methods($string)))
                $string = strval($string->__toString());
            $string = strval($string);
        }
        return self::putObject($string, $bucket, $uri, $acl, $metaHeaders, $contentType);
    }

  107. Hey guys. I’ve got a problem. I read in other comments about making the file private and wanting to control access to the file when downloading. I don’t want to do the temp URL thing – I’m not sure if that is secure enough (maybe it is; if so I’d like to know why 😉 ). Instead, I’d like to use the getObject method to grab the file information, put that into some sort of PHP file object, output the file headers, then do a readfile($filename); to send it to the browser.

    I could potentially just save the file to the hard disk, serve it up, then delete it once it’s been downloaded, but that may prove inefficient. I’d like to have it load in memory, never touch the hard disk, and load right into a PHP file object and then pump that out to the page (especially for images etc.)

    Any thoughts? I’d like to use the direct link to the file on amazon, i just want to make sure i can control access to the file.

    Thanks in advance!

  108. Heron says:


    A generated temporary URL is about as secure as it gets. You set the expiration time; I would suggest setting an expiration two or three minutes in the future. Yes, the URL works for anyone, but only during that two minute window, and if your user wants to share the file they could just do it after downloading it anyway.

    The best option is to control whether or not you provide the signed URL in the first place; for example, I put the full version of my software in a bucket, and provide a signed temporary download URL for the user when they’re logged in to my website. That is, they only get the URL once I’ve already verified who they are, and even then the download URL is only valid for a few minutes. I can’t stop them from throwing the program up on BitTorrent or something, but that’s an impossible dream anyway.

    I’d recommend against downloading the file to your server and then serving it to the user; you lose one of the best benefits of using S3 (which is that Amazon’s pipe is bigger than yours).
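    The flow Heron describes can be sketched with this class’s getAuthenticatedURL() (the bucket, object name and login check below are placeholders):

    ```php
    require_once 'S3.php';

    $s3 = new S3('ACCESS_KEY', 'SECRET_KEY');

    // ...verify the user is logged in here, then hand out a short-lived link...
    $url = $s3->getAuthenticatedURL('mybucket', 'downloads/product.zip', 180); // valid for 3 minutes
    header('Location: ' . $url);
    ```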

  109. Jason says:

    No worky.

    I hate the fact that I am posting here with problems but I just can’t make it work.

    I get “Warning: Invalid argument supplied for foreach() in /srv/www/htdocs/upload/page.php on line 49”

    Which isn’t a big deal. The main problem is that whenever I try to upload an image (tried many file types including jpg, jpeg, gif, etc) I get “Something went wrong while uploading your file… sorry.” With no direction as to where I went wrong.

    I tried creating a new bucket by hand and using it; I even tried letting it create one. It’s not the bucket name, because as soon as it fails I can go and create one with the same name. There is no real error reporting with this class to help me figure it out, so I have to post here.

    Any help/guidance would be greatly appreciated.

  110. Heron says:


    If you e-mail me your code for page.php (be sure to remove your AWS keys!) I may be able to help you figure out what’s going on.

    – Heron

  111. sharad says:

    Used the library for the first time today. Really easy to use. I have two questions/comments

    a) I assume it is the caller’s responsibility to retry if an InternalError is returned.

    b) I did a putBucket on the same bucket multiple times today. They all indicated success. Shouldn’t I be getting a BucketAlreadyOwnedByYou?


  112. David J says:

    Anyone know what might cause this?:

    PHP Warning: S3::putObject(): [MalformedXML] The XML you provided was not well-formed or did not validate against our published schema in includes/S3.php on line 357

    I can list buckets ok, but putObject with a string for the input parameter generates that (and doesn’t work).


  113. David J says:

    @David J – to answer myself 🙂

    It occurs when the bucket is the empty string.

  114. I love this class, but I recently ran into an issue with a server that does not have any certificates installed. For situations like these, I’d like to see one or more of the suggested solutions added…

    1: allow the user to specify a define to turn off CURLOPT_SSL_VERIFYPEER, e.g.

    curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 1);
    $verifyPeer = !defined('S3_CLASS_SSL_VERIFYPEER') || S3_CLASS_SSL_VERIFYPEER;
    curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, $verifyPeer ? 1 : 0);

    Then all a developer needs to do is add the following line to their code before using the S3 class:
    define('S3_CLASS_SSL_VERIFYPEER', 0);

    2 and 3: Another option would be to allow us to define the path for CURLOPT_CAINFO and/or CURLOPT_CAPATH so we can specify the .crt files.

    Thanks again for a great S3 class!

  115. R. Moose says:

    Is there any way to use this class but somehow ‘encrypt’ or ‘sign’ an access key so that if the code gets taken, I can disable the access key only for this script?

    The problem is that our site has multiple developers working on it and I really don’t want to just hand out the keys for everyone to have access to.

  116. Heron says:

    @ R. Moose:

    Unfortunately no, there is no way to do that. If the key gets taken, you’ll simply have to generate a new keypair from your AWS account.

    One solution would be to create a web service with your own authentication mechanism that does the interaction with S3 itself.

    However, there is a larger problem here. If you don’t trust your developers with your access key, why do you trust them to write the code in the first place? Being able to revoke an access key won’t prevent them from (for example) deleting the contents of your buckets, sabotaging your code, etc etc. I’d recommend hiring trustworthy developers 😉

    Non-disclosure agreements can cover this type of thing, giving you legal grounds to sue them into oblivion to recover your costs if they use such secrets to damage your company. It also doesn’t hurt to change your access keys whenever a developer leaves the company.

  117. Dave says:

    Thanks for some great work — nice to use well-done code.

  118. Ricky de Jong says:

    Great class!
    Congrats! 😀

    I’m, sort of, disliking echoing $s3->getAuthenticatedURL(), but that’s me.

    But talking about the expire time. Let’s say it’s set to 2 minutes. Does that mean if you download a file that would take 3 minutes to download, you will get an error after 2 minutes because of lacking access to s3?


  119. Heron says:


    No, the download will continue as normal. The expiration time is for the start of the download (that is, for the moment when the link is clicked).

  120. Don says:

    @Ricky: With regards to getAuthenticatedURL(), see what Heron said above.

    What you want to do is create a link with getAuthenticatedURL() as the user clicks, so: /download/fileId redirects to the getAuthenticatedURL() link. This way the link is always valid.

  121. Ricky de Jong says:

    Thanks for your quick reply, Don and Heron 🙂


  122. Ricky de Jong says:


    Just a small note: the expiration time for the getAuthenticatedURL is when it’s been generated and not when you click on it 😀

  123. Ben Garrett says:


    Firstly, many thanks for the class – it’s just what I was looking for.

    Unfortunately I’m getting the same timeout errors mentioned by others in earlier posts, when calling putObjectFile. The timeout occurs trying to put a small file (less than 3k), and it’s the second putObjectFile on the same S3 object – the first call works perfectly. Any ideas?


  124. Heron says:


    What I meant was, the expiration date determines the range of time during which the link may be clicked on. If the link is clicked outside of that time frame, it is invalid.

    So, the link may be clicked on (or visited in some other way) at any time between the moment it is generated and the expiration time.

  125. Kirby says:

    First, awesome resource! Currently, I am creating a class which takes a bucket of files and matches it with its backup bucket. If a file is not there in the backup, it copies it into the backup bucket. Just a simple backup program between two buckets in one S3 account.

    I am running into a problem. The getBucket function is not grabbing all of the files in the first bucket (we have a ton). Any idea what’s going on? Is it the PHP function’s fault? Is there a limit on how many files one can grab from a bucket?

  126. Jonathan says:

    I’m getting the timeout too, but only on the second file upload. What I’m doing is:
    1.) putObjectFile to upload (/tmp/temp.jpg)
    2.) resize image to make thumbnail to the same file (/tmp/temp.jpg)
    3.) putObjectFile to upload thumbnail (/tmp/temp.jpg)

    If during the resize I create a new file (/tmp/temp2.jpg) then it works fine.

  127. Don says:

    @Ben, @Jonathan: Think there is an issue with resources being locked/not cleaned up.

    @Kirby: Will look into it.

  128. Heron says:

    Isn’t there a “paging” sort of thing going on when you list the contents of a bucket? That is, I was under the impression that listing the bucket contents can only happen a thousand objects at a time.

  129. Kevin says:

    Love the code, however updated to 0.3.9 and it seems to still have issues with files of 0 bytes. I keep getting the Missing input parameters error. I see this was mentioned before in the comments, is this still in the todo’s?


  130. Heron says:


    After fiddling with 0.3.9’s source for a while, I’m still finding that the only way to put a 0-byte file onto S3 is by POSTing directly to S3 (via an HTTP POST, that is), as described here:

    It’s worth noting that it’s not just 0.3.9 that has this issue with 0-byte files; the S3Fox Firefox extension won’t let you upload 0-byte files either.

  131. Don says:

    @Kevin: With regards to 0 byte files, you can get around it by setting $requestHeaders['Content-Length'] = 0; when using putObject().
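    As a sketch, Don’s workaround with placeholder bucket/object names:

    ```php
    require_once 'S3.php';

    $s3 = new S3('ACCESS_KEY', 'SECRET_KEY');
    // Force an explicit zero Content-Length so the empty body is accepted
    $s3->putObject('', 'mybucket', 'empty-marker.txt', S3::ACL_PRIVATE,
        array(), array('Content-Length' => 0));
    ```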

  132. Don says:

    @Kirby: I had a look into this and could not reproduce your problem. I’m inclined to think your problem is related to a memory limit or perhaps even a bug in your PHP or libcurl versions.

  133. Kevin says:

    I wasn’t able to get $requestHeaders['Content-Length'] = 0; to work – it just gave me another error which I don’t have in front of me, something about the method not being implemented. But I’m managing my files database locally, so I’m able to adjust my logic to make everything work fine. Thanks for doing the S3 integration for me. 🙂

  134. Lionel Morrison says:

    Are there any known issues with the ZendCore v2.5 install (PHP & Apache2) and OpenSSL 0.9.8i? The PHP info page says everything is installed, but everything seems to fail unless I pass ‘false’ along with the S3 constructor. Other than that I have got this to work on other installs without issue. Thanks.

  135. Heron says:


    The third parameter to the constructor is whether to use SSL when communicating with S3. If you get errors with S3 enabled, you might need to check to make sure you have updated certificates installed on the server.

    If you’re not sure, or they look like they’re installed, you don’t lose much by using a non-SSL connection; there isn’t any sensitive data in the request URL, so unless you’re storing sensitive data there isn’t a problem.

    If you are storing sensitive data, you should be encrypting it on your end before uploading anyway 😉

  136. Max says:

    Great piece of work!
    Does anybody know how to retrieve the total number of files in a bucket? I need this for pagination…

  137. Heron says:

    @Max: The S3 class will grab all of the items in the bucket unless you specify the max number of items to return. There’s no way to get just the total number of items in the bucket, but you could cache the results of a single call to getBucket and paginate the results yourself (instead of using the class to do it).
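    The caching approach Heron suggests might look like this ($pageNum and $perPage are illustrative variables):

    ```php
    require_once 'S3.php';

    $s3 = new S3('ACCESS_KEY', 'SECRET_KEY');
    $objects = $s3->getBucket('mybucket');  // full listing, keyed by object name
    $total   = count($objects);             // total number of objects, for the pager
    $page    = array_slice($objects, ($pageNum - 1) * $perPage, $perPage, true);
    ```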

  138. popnoart says:

    Great class, very easy to use!! Thanks a lot!

    One question : Does anyone know how to delete a folder inside a bucket? I know how to delete the files inside the folder but I’m not able to delete the folder itself.

  139. Heron says:

    @popnoart: Buckets can’t actually contain folders. Some S3 clients (like S3fox) pretend there are folders based on the prefix of object names, but there aren’t really folders in buckets.

  140. Gerry says:

    This seems like a great class, but the documentation seems a bit lacking.

    1. In your example.php file you call putObjectFile() but in the description for putObjectFile() it calls it a “legacy function”. So I’m not sure if I should be using it or not.

    2. The class documentation ( ) seems incomplete as it doesn’t have any descriptions telling me what the functions do.

  141. Don says:


    putObjectFile() will always exist but for additional functionality (that was added much later on) you want to be using S3::putObject(S3::inputFile()) in its place.

    If you mouse over the method names on the documentation page there are descriptions available for each. But you’re right, the documentation definitely needs work – although I think there is enough information available since most folks don’t have trouble figuring things out 🙂
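    The newer form Don mentions, as a sketch with placeholder names:

    ```php
    require_once 'S3.php';

    $s3 = new S3('ACCESS_KEY', 'SECRET_KEY');
    // S3::inputFile() wraps a local file path for putObject()
    $s3->putObject(S3::inputFile('/path/to/local.txt'), 'mybucket', 'remote.txt', S3::ACL_PUBLIC_READ);
    ```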

  142. Gerry says:

    Yes I saw the mouse overs. Ok well let me use putObject as an example, here is the definition:

    putObject (mixed $input, string $bucket, string $uri, [constant $acl = S3::ACL_PRIVATE], [array $metaHeaders = array()], [array $requestHeaders = array()])

    and the mouseover description is:
    “Put an Object”

    So the description didn’t add anything to what the name had already given me, I have no idea what the arguments are for or how they change what the function does. I understand how it’s hard to write descriptions for functions that you know inside out as everything seems obvious to the author, but I really have no clue as to what $metaHeaders is for. I have to look into the code and work out what everything in there does to figure it out, which kinda removes the point of having documentation.

    example ripped from the php manual:
    mixed str_replace ( mixed $search , mixed $replace , mixed $subject [, int &$count ] )

    This function returns a string or an array with all occurrences of [search] in [subject] replaced with the given [replace] value.

    Don’t get me wrong, I’m awful with my own documentation and in telling you all this I’m learning myself about how I should document things in future.

  143. onassar says:

    Quick question.
    I’m working with some web services which automatically push content to my S3 bucket for me.

    The problem with this, is I’m not able to set the headers (eg. Expires) since their web service automatically pushes it on my behalf.

    Do you have any ideas on how best to retrieve an object, and set headers? I looked through your source, but didn’t notice any examples to specifically deal with an object that already exists, and changing it after it’s been put on S3.


  144. onassar says:

    Figured out that the way to do this is to make a call via copyObject, stating the same source uri and bucket as the destinations.

    The exception here, being that you then must set the headers for the destination object.

    This class doesn’t have this, and I’ve tried hacking it but it wouldn’t seem to work.

    Any thoughts?
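    The copy-in-place trick onassar describes would look roughly like this, assuming a copyObject() that accepts replacement headers (which, as he notes, this version of the class may not fully support). Names and header values are placeholders:

    ```php
    require_once 'S3.php';

    $s3 = new S3('ACCESS_KEY', 'SECRET_KEY');
    // Copy the object onto itself, supplying the new headers for the "destination"
    $s3->copyObject('mybucket', 'file.jpg', 'mybucket', 'file.jpg',
        S3::ACL_PUBLIC_READ, array(),
        array('Expires' => gmdate('D, d M Y H:i:s T', strtotime('+1 year'))));
    ```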

  145. parag says:

    Hi, we store our media files on Amazon S3. When a user requests a download from our website (say X), the file is read from Amazon and put to X. How about having the file download directly to the user’s location?

  146. miguel says:

    This may be a silly question…are the calls this class makes secure by default or do I have to specify that somehow? I mean, can someone sniff the “secret access key” Amazon provides us or is it encrypted? Thanks, sorry to be a dummy, I am new to this! 🙂

  147. Heron says:


    The secret key cannot be determined by an eavesdropper, whether or not you’re using an SSL connection. Don’t worry, it’s safe 🙂

    Look at the third parameter to the class constructor. By default, the class uses SSL to communicate with Amazon S3.

  148. bhimrao says:

    How do I check the expiry on an image?

  149. Hi Don,
    First of all great work!
    I just wanted to let you know that when I try to use the putObjectString, the request hangs for 2 – 3 minutes and then returns true finally.

    This is the code I’m trying:

    // put body on S3
    require_once(LIBRARY_PATH . "/s3/S3.php");

    if ($s3->putObjectString("hello loco", 'blovelspot', 'test/loco2.html')) {
        print "success"; die;
    }
    print "fail"; die;

    Any ideas what I’m doing wrong?

  150. bhimrao says:

    How are you checking expires headers?

  151. Heron says:

    @bhimrao: To which headers are you referring?

    @Jorge: Can you duplicate the issue on another computer (preferably using a different ISP)? I haven’t seen anything similar.

  152. bhimrao says:

    I want to check whether the Expires header given to the image is set properly or not.

  153. Hi Heron,
    I am doing this from an Amazon EC2 instance, so I’m sure it’s not related to the ISP. I did read in some Amazon forums there was some hanging on some file uploading operations when the files are not images but text (the objects I’m trying to upload are HTML files), so what I ended up doing was saving the HTML object first in the /tmp directory of the instance and then doing a putObjectFile. That works perfectly, so I’m sticking with it.

    Thanks again for a wonderful library!

  154. Frank Unwin says:

    What an excellent and simple implementation. First impressions are fantastic, not tried all the features.
    Many thanks for your contribution.

  155. Heron says:

    Jorge: I’m glad you’ve got it working. Come to think of it, I don’t think I’ve ever used putObjectString. I’ll pester Don into testing it sometime 😉

  156. Kevin says:

    Great stuff, but how do I list just the files in a subfolder of a bucket? e.g.
    $bucketName = "thebucket"; // but what I really need is "thebucket/this/that/theother"

    $contents = $s3->getBucket($bucketName);
    print_r($contents); // too much info – I just want a sub listing!

  157. Heron says:


    S3 doesn’t support folders. Clients like S3Fox only imitate folders for convenience; as a result, this S3 class doesn’t support folders either.

    The best thing you can do, therefore, is use the $prefix parameter to getBucket().

    For example:

    print_r($s3->getBucket($bucketName, 'this/that/'));

    This would list the objects in $bucketName whose names start with 'this/that/' (which is how S3Fox emulates folders).

  158. Kevin Yeandel says:

    Hello again,
    A little question, what’s the best way to move a file – I would rather not get and put. e.g. mv, unlink/link if it is possible. Many thanks for this code.

  159. Steve says:

    Amazing Amazon S3 Class!

    Is there a way to use wild cards in deleteObject? i.e. something like deleteObject($bucketName, 'this/that/*.mp4')

    Also is it possible to copy from a bucket in one account to bucket in a different account? (I own both accounts.)

  160. Heron says:

    Kevin: You cannot “move” an object; there is nowhere to move it from or to. A bucket is simply a collection of objects, an object is either in the bucket or it is not. You can copy it from a bucket to the same bucket with a different name, and then delete the old object, and that’s as close as you can get to renaming it.

    Steve: To your first question, no, you cannot use wildcards. S3 is not a file system and does not understand wildcards.

    You cannot directly copy from a bucket in one account to a bucket in another account using a single S3 service call; you’ll have to download it from the first account and upload it to the second account.
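    The copy-then-delete “move” Heron describes, as a sketch with placeholder names:

    ```php
    require_once 'S3.php';

    $s3 = new S3('ACCESS_KEY', 'SECRET_KEY');
    // Copy to the new name, and only delete the original if the copy succeeded
    if ($s3->copyObject('mybucket', 'old/name.txt', 'mybucket', 'new/name.txt') !== false)
        $s3->deleteObject('mybucket', 'old/name.txt');
    ```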

  161. Steve says:

    @Heron – Thanks. My bad. Perhaps I didn’t ask the right question for my first question. How about something like deleteObjects($bucketName, '/this/that')?

    For example, I upload 150 objects (from my PC) to S3 and catalog them in a database. Then to delete them I iterate through them thus:

    while ($row = mysql_fetch_array($result)) {
        $filename = $row["FileName"] . ".mp4";
        $S3File = "S3Folder/$filename";
        $s3->deleteObject($bucketName, $S3File);
    }


    This takes 48 seconds. However, doing the same thing from my PC using:

    “rescmd s3 delete-objects aws-key:awskey aws-secret:awssecretkey bucket:S3B key-prefix:S3Folder”

    takes only 8 seconds.

    I’m not sure what rescmd does in this case, but I can’t believe that it first requests a listing of all objects in the bucket with the prefix and then makes all the individual delete operations six times faster than my PHP script. Does it?

    That’s too bad about not being able to copy to another account. What do people do for backup? In the hopefully unlikely case that my secret key is discovered, it would be nice to have my stuff backed up in another account without the hassle of first downloading it someplace else and then uploading it again to S3. What a needless carbon footprint that produces. 🙂

  162. Heron says:

    As far as backups go, you don’t need to worry about that; S3 handles redundancy and availability for you. Your data is stored on multiple physical drives in multiple geographically separated data centers (at least, that’s my understanding).

    As far as your secret key goes, well, keep it secret 🙂 But if it does get discovered, you can just log in to your Amazon AWS account and get a new key (which erases your old one). No need to maintain (or pay for) a second AWS account.

    I’ll e-mail the S3 devs and ask them the best way to delete multiple files like that 🙂

  163. Heron says:

    The S3 API only supports deleting one file at a time. I suspect the reason using the S3 class takes so much longer than rescmd is that the S3 class re-creates an internal object and re-initiates a connection with curl for every call to deleteObject, whereas rescmd (I suspect) does not.

    This isn’t really a bad thing, but it’s not optimal if you’re calling deleteObject two hundred times in a row.

    Maybe I’ll write a function to do batch deletes more efficiently later. I’m at work right now 😉
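    A simple batch delete by prefix along those lines (bucket and prefix are placeholders; the S3 API itself still deletes one object per request):

    ```php
    require_once 'S3.php';

    $s3 = new S3('ACCESS_KEY', 'SECRET_KEY');
    // List everything under the prefix once, then delete each object
    foreach ($s3->getBucket('mybucket', 'S3Folder/') as $name => $info)
        $s3->deleteObject('mybucket', $name);
    ```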

  164. Steve says:

    @Heron. Thanks for looking into optimizing deleteObject – that would be great. On the other issue about backup: I understand that S3 has storage redundancy, but that just protects the data that’s there. If someone got the key and deleted files, then they are gone, right?

    [slightly off topic mode on:]
    (This is why I’m worried about backup and why I’d like to be able to copy to a separate S3 account.)
    I’m playing with some stuff that allows multiple users to upload to my S3 account. I haven’t (yet) figured out how to set up ‘signed’ uploads. I’m close using this approach: I can generate the encoded policy and signature, but I can’t figure out how to get a command line utility to do the HTML POST forms, though I am making some progress. So what I’ve done in the interim (a really bad idea, but used to test the rest of the application for now) is encode my secret key; when the user wants to upload, the client requests the encoded key, decodes it and uploads the files. (The key is never saved in a file, but still a bad idea.) So I’m struggling with this a bit. (Until I get past the interim solution I was thinking of copying all the files to another S3 account that has a more secure key.)

  165. Heron says:


    Yes, if someone deletes the files, they’re gone. Amazon does not keep extra backup copies anywhere.

    I do HTTP POST form uploads myself fairly often; if you control the server there is no reason for the end-user to ever have possession of your secret key. However, if you’re wanting a command-line utility to do the uploads, you may consider writing your own web-based API that the command-line utility can hit instead of trying to post directly to S3. That way you can have separate credentials for each user (your web API would ask for those), and you wouldn’t ever have to give end-users your secret key.

    I realize writing your own web API with credentials isn’t trivial, but it’s not too difficult either; it may be the best solution to your problem, assuming you have access to a web server on which to run your API.

  166. Fabrizio says:

    Is it possible to set a bandwidth limit for files upload?

  167. Heron says:

    Fabrizio, that’s not something that is controlled by the S3 class.

    Depending on how you’re uploading the file, you might be able to control it through apache or php, or even your operating system or router, but it’s not something we can really help you with, sorry.

  168. sakthi says:


    I have tried your code, but I got the following warning and no result.

    Warning: S3::listBuckets(): [RequestTimeTooSkewed] The difference between the request time and the current time is too large. in S3.php on line 90

    can you please check and let me know.

  169. Heron says:


    You’ll want to make sure that the time on the machine running the PHP is set up correctly. (If it’s 15:00 UTC, the machine’s clock should be set accordingly, with the proper time zone if necessary.)

  170. First of all, thanks for a great class. I’ve used it to create a script that mirrors a local folder structure to an S3 bucket.

    But running this on large folders, say ~4000 files in all, fails. The PHP script eats up all the memory to the point that all the virtual machine does is swap. Top reports memory usage well above 90% for the PHP sync job.

    Have you made any attempts at helping the garbage collector free up memory using destructors and unsets?
    Has anyone else encountered this memory problem?


  171. Don says:

    @Lars: Heron and I are going to look into this.

  172. Troy Hakala says:

    I get the following error every single time I try to upload large (>200MB) files:

    Warning: S3::putObject(): [55] select/poll returned error in S3.php on line 358

    It sends data for a while and then the network traffic drops to zero and 30 seconds later or so it fails with the above error message.

    I’m using version 0.4.0 of S3.php, libcurl 7.19.5 and PHP 5.2.6 on Debian.

    Anyone else have this problem? Anyone know of a solution?

  173. Heron says:


    Do you have access to another machine? I’d like to know if you have the same problem uploading the same file from a different machine (preferably a different OS). That could help narrow down where the problem is (differences in kernel versions, libcurl verisons, etc etc.)

    Next time I boot into Linux I’ll try uploading a large file and see what happens.

    I can say, however, that my previous employer regularly uploaded 200+MB files without issues from a machine running Gentoo Linux.

  174. Troy Hakala says:

    I have another linux box that I also use to upload to S3 and the versions are mostly the same but libcurl is 7.18.2 and the same thing occurs on that machine. I don’t have any linux machine that is significantly different from these two as I generally keep up-to-date on all of them.

    I was assuming that it was a problem with S3 but I can use a different S3 client on a Mac that doesn’t use libcurl (it uses OS X’s NSURL* classes) and it uploads the large files successfully.

    I can give you an strace, if that helps.

    FWIW, I found someone else reporting the same problem and there’s supposedly a solution to it but I’m not a paying member of so I can’t see the solution:

  175. For those of you who run out of memory when uploading loads of files (using putObjectFile). Try to disable $useSSL (S3::$useSSL = false;).

    My memory use dropped dramatically. It seems like something is leaking when using ssl.

    Otherwise a very nicely written class!

  176. alex says:

    I have this problem.
    When I upload multiple files simultaneously to Amazon S3, fewer photos are uploaded than I selected.
    For example, I upload 15 pictures at once to Amazon S3 but only 10 arrive.
    Is there a timeout setting or something else?


  177. Heron says:


    Are you uploading them one by one? If not, how are you uploading them? Are you checking the return value of putObject to make sure it succeeded?

    – Heron

  178. alex says:

    I have resolved the problem, but now when I upload a file, it is impossible to upload files over 2 megabytes.

    Is there a solution to increase this limit?
    Does this depend on Amazon S3 or the S3 class?


  179. Don says:


    Have you checked your INI settings to make sure there are no memory limits?

    Try disabling SSL (S3::$useSSL = false) – that should give you unbuffered uploads.

  180. Paul says:

    Is it possible to set the max upload file size in the S3.php privacy policy?


  181. Mike P. says:

    Great Class – Thanks!

  182. Para says:


    I am trying to use this class to check if a folder inside a bucket is empty or not.
    Does anyone have any idea how I could do that?
    thanks in advance

  183. Heron says:


    There is no such thing as a “folder” in S3. Folders are emulated by simply adding a prefix to an object name; for example, I might name an object “pictures/me.jpg”. Some clients (like S3Fox) treat that as a file named “me.jpg” in a folder named “pictures”, but that’s not how S3 itself treats things.

    As a result, Don’s S3 class only sees objects including their prefixes. If you want to do the search yourself, simply list the contents of the bucket, and search for any that include the desired “folder” name in the object name.

    I hope that makes sense 🙂

    – Heron
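Heron's prefix-search suggestion can be sketched as a small helper. This is a hypothetical example: the `folderIsEmpty` function is not part of the class; it only assumes that `getBucket($bucket, $prefix)` returns an array keyed by object name, or false on error, as this S3 class does.

```php
<?php
// Hypothetical helper based on Heron's suggestion: list only the keys
// under a prefix and see whether anything real is there. $s3 is expected
// to be an instance of the S3 class (or anything with a compatible
// getBucket($bucket, $prefix) that returns an array keyed by object
// name, or false on failure).
function folderIsEmpty($s3, $bucket, $prefix)
{
    $objects = $s3->getBucket($bucket, $prefix);
    if ($objects === false) return null; // request failed
    // Some clients create a zero-byte placeholder object named exactly
    // like the prefix ("pictures/"); ignore it when counting contents.
    unset($objects[$prefix]);
    return count($objects) === 0;
}
```

The prefix should end with a slash (e.g. "pictures/") so that keys like "pictures2/…" are not matched.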

  184. Naerk says:


    I have a problem with $s3->putObjectFile($uploadTmpFile, $bucketName, basename($fileName), S3::ACL_PRIVATE);

    If I use small files like 2 MB, everything is great. But if the file is bigger than 12 MB, it loads for a long time and then I get a timeout.

    If I comment out the putObjectFile call, my webserver has no problem receiving files of 80 MB into PHP's tmp dir.

    Any ideas?


  185. Heron says:


    Have you tried uploading a file directly to your webserver some other way (i.e. through FTP) and then seeing if putObjectFile can upload that file?

    If you can try that, it will help narrow down the cause of the issue.


  186. Tommy says:


    From the above comments I understand that your script is working fine, but I am getting an error when I apply it on my website. I put the program on the user side; my plan is to upload user profile images to Amazon automatically.

    I am using the script for uploading files to Amazon S3. The buckets are listed properly, but I am not able to upload files to buckets. I am getting the error,

    S3::putObject(): [60] SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

    and upload fails.

    I am also not able to create new buckets.

    Any solutions?

    Thanks in Advance ..

  187. Ian Anderson says:

    @Tommy – it might be trying to use SSL – if you set the third parameter to false when instantiating S3, this might help

    class My_Service_Amazon_S3 extends S3
    {
        public function __construct($awsUserKey, $awsPrivateKey, $useSSL = false)
        {
            parent::__construct($awsUserKey, $awsPrivateKey, $useSSL);
        }
    }

  188. Niro says:

    Excellent class! thank you very much Don.

    I’m having the large-file problem mentioned in earlier posts (Nic, Sven, Bryan). My files are 450 MB in size, and the upload is from a Linux virtual private server to S3 using putObjectFile. The symptom is that CPU consumption is stuck at 100% for the HTTPD process.

    The information I could find here suggests that some setting of PHP/Apache/cURL is causing a timeout.

    Can anyone suggest which configurations to check? or other way to find out the exact reason?

    Thanks, Niro

  189. Don says:

    @Niro: SSL can cause some problems when uploading large files. Because the stream is encrypted, files need to be loaded into memory – and the 100% CPU usage you’re seeing is most likely the encryption.

    If you don’t need SSL, don’t use it.

  190. Niro says:

    Thanks Don, I tried it without SSL and it still gets stuck.

    Any other suggestions?

  191. Ian Anderson says:

    Awesome class, Don – it’s saving us masses of time. I’m writing a PHP class to sync our buckets to a local server for backup purposes, and I have a question regarding subfolders

    I have made it so that my script attempts to create a folder with the name of any “subfolder” objects returned by S3, and in my testing S3 has always returned them in the right order so that an object with name


    always appears in your listing after the “folders”


    Will it always return the “subfolders” in the correct order as above?

    i.e. can I trust it not to try to save a file into a folder I haven’t created yet?

    Many thanks indeed

    Ian Anderson

  192. Ian Anderson says:

    @Niro – How much RAM does your virtual server have? Maybe it’s going into swap

  193. Tommy says:

    Thank you, Ian Anderson – thanks for your reply! Problem solved.

  194. Anoop says:

    Great Script !! and nicely written. Thanks.
    I have a small issue: how can I put files into sub-folders?
    E.g. I want to store a file under bucket/sub-bucket1/sub-bucket2/. How can I store to ‘sub-bucket2’?

    Thanks in advance..

  195. Niro says:


    The server has total 1.8G, 400M used and about 1.4G free

    Also added to the script
    ini_set('memory_limit', '128M');

  196. Niro says:

    @Ian, Just ran a test (looking at top while the file transfer is supposed to happen). Definitely not a swapping issue.

  197. Para says:

    I have a question.
    Is there any way to get traffic statistics for an Amazon S3 account (total uploaded bytes and total downloaded bytes) using this library?

    Thank you in advance

  198. Don says:

    @Niro: Remember that you’d need to set the memory limit in php.ini – and if you are posting files, also remember to set the post_max_size INI directive. It does seem to be memory related, though.

  199. Don says:

    @Para: There is, which is why the class contains some functionality for logging. I’ve been working on an S3 statistics app using this class – right now it parses S3 logs and stores the data in MySQL. I’ll try to get around to packaging it for release in the next few days. I wanted to release it with a GUI, but can’t find the time for that at the moment.

  200. Para says:

    I would really only need the total amount of bytes uploaded and the total amount downloaded. What function should I use? I didn’t find one that handles this.

  201. Don says:

    @Para: There is no function to do what you need. See the logging documentation. The class lets you enable/disable logging. You would still need to download those logs and parse them accordingly.

  202. cjsewell says:

    Hi there, I am trying to get the total size of a bucket, so far the only way I can do it is by looping through the response of getBucket.
    This works, but is extremely slow, especially for large buckets.
    Is there any better, faster way?

  203. Don says:

    @cjsewell: Unfortunately that’s the only way. A better approach might be to maintain a local SQL-based listing of your buckets and use that to calculate sizes.
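Don's loop-and-sum approach might look like the sketch below. The `sumListingSizes` helper is invented for illustration; it only assumes that each `getBucket()` entry carries a 'size' element, as this class's listings do.

```php
<?php
// Hypothetical sketch: add up the 'size' element of each entry in a
// bucket listing as returned by this class's getBucket(). Keeping the
// summing in its own function makes it easy to test on a plain array.
function sumListingSizes($objects)
{
    $total = 0;
    foreach ($objects as $object) {
        if (isset($object['size'])) $total += $object['size'];
    }
    return $total;
}

// Usage against a real bucket (bucket name made up):
//   $bytes = sumListingSizes($s3->getBucket('my-bucket'));
```

As Don says, this is slow for large buckets because every object must be listed; the same helper could instead run over a locally cached listing.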

  204. Kirby says:


    I am creating a jpg bytearray in actionscript, passing it to php, and attempting to upload to the S3 using putObject. Unfortunately, I get stuck with an error that says “MalFormed XML…” I tried looking at the other threads but they weren’t clear on their answers. Here is my code:

    // it passes a jpg bytearray named "jpgData"

    $s3 = new S3($awsAccessKey, $awsSecretKey);

    $fileName = "test.jpg";
    $input = array();
    $input["data"] = $jpgData->data;
    $input["type"] = "image/jpg";

    return $s3->putObject($input, "", $fileName, "public-read");

    Any ideas? Thanks and your class is awesome!

  205. laxmikant says:

    How can I create a sub-folder? Please help.

  206. Corey Sewell says:

    Hi, I found a faster way to get the sizes of buckets. I ended up making a class for downloading and parsing the reports you get from the AWS portal.
    It’s related to your class, so I thought I would share. You can find out more here:

  207. laxmikant says:

    thanks buddy

  208. laxmikant says:

    With the same code, I am not able to upload files to one bucket, but I am able to upload to another bucket. What’s the reason? Can anyone help me?

    Any ideas? Thanks

  209. Heron says:

    laxmikant: Do you have write permissions in the bucket you’re trying to write to?

  210. Ian Anderson says:

    I have a complaint to make – it’s working too well and I don’t understand why 🙂

    When I call getBucket(), as I understand things, it should only receive 1000 objects from Amazon and the truncated property should be set to true so that I know to call it again

    But I’m receiving all the objects in my bucket anyway – about 1400 – and the truncated property is still set to true

    Does anyone else get this?

  211. Ian Anderson says:

    Ah – I think I see the magic that does it, sorry

    do { … } while ($response !== false && (string)$response->body->IsTruncated == 'true');

    When I echo out a marker at the top of the loop it only gets echoed once though, and it should get echoed twice – weird. But it works brilliantly – thanks once again, Don

  212. Heron says:


    getBucket actually retrieves responses in two different places. Right above the do-while loop is an if statement that checks whether the first response was truncated, and if so, it enters the loop, where it checks for further responses.

    Does that make sense?

    – Heron

  213. ilija says:


    I am a little bit confused by the REST API and how to use it.
    My concern is whether, using this script, a file gets downloaded
    to the server first before going to the client browser, or whether there is a way to download a file from S3 to the client browser directly.
    I am looking for a way to avoid double bandwidth (S3 to server, then server to client).

    Thank you.


  214. Heron says:


    This S3 class only allows client->server->Amazon S3 transfers. If you want client->Amazon S3 transfers, you’ll want to set up an upload form using POST.

    Read this example:

    Hope that helps.

    – Heron
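The POST-upload approach Heron mentions is not part of this class; the sketch below shows roughly what the server-side half looks like under S3's browser-based upload scheme (a base64 policy document signed with HMAC-SHA1). The function name, bucket, and expiry are made up – treat it as an outline, not a drop-in.

```php
<?php
// Hypothetical sketch of generating the hidden "policy" and "signature"
// form fields for a browser -> S3 POST upload. The policy is a JSON
// document, base64-encoded, then signed with the secret key (HMAC-SHA1).
function s3PostPolicy($secretKey, $bucket, $expiresAt)
{
    $policy = json_encode(array(
        'expiration' => gmdate('Y-m-d\TH:i:s\Z', $expiresAt),
        'conditions' => array(
            array('bucket' => $bucket),
            array('acl' => 'private'),
            array('starts-with', '$key', ''), // allow any object key
        ),
    ));
    $policyB64 = base64_encode($policy);
    $signature = base64_encode(hash_hmac('sha1', $policyB64, $secretKey, true));
    return array('policy' => $policyB64, 'signature' => $signature);
}
```

These two values, together with the access key id, acl, and key fields, go into the HTML form that posts directly to the bucket's endpoint, so the file never touches your server.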

  215. Khanh says:

    Thank you so much for such a useful class. I was able to use this class as expected to upload files to S3, but I am running into an issue with any files larger than 10 MB. I’ve changed the upload size in PHP to 20 MB and still no luck. There are no errors being kicked back either.

    $s3->putObjectFile($fileTempName, $dest_bucket, $fileName, S3::ACL_PUBLIC_READ)

  216. Yousef says:

    Khanh, we were having the same problem so we tried all the usual PHP changes (post_max_size, upload_max_filesize, memory_limit, max_execution_time). None of them helped… In the end we figured out it was the Apache timeout stopping the script completing. Now it’s working fine for 50MB files. We’ll be trying larger ones later.

    Great class by the way!

  217. Hugo says:

    Is there a way to append a string to the name of the file while uploading it to the S3 server? I.e. a file uploaded via the HTML form as my_file.txt would, once uploaded, become 1_my_file.txt.

    Thank you

  218. Luk says:


    I just started to use this class, and I am having problems with example-cloudfront.php.

    Warning: S3::listDistributions(): [60] SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed in S3.php on line 935

    Even when I try to change the resource to 2009-12-01/distribution, the warning is the same. When I set $useSSL to false, I obviously cannot connect to the host… Besides, I can use listBuckets() only with SSL disabled, so I am not really sure what is wrong, or what could help.

    Any ideas?

  219. Luk says:

    Hi again :),

    is it correct that this class doesn’t support creating/listing/updating/deleting streaming distributions, or am I missing something?

    Thank you, Luk

  220. Arifcsecu(Bangladesh) says:

    I got this error. I don’t understand why it happens:

    Warning: S3::putObject(): [60] SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed in C:\xampp\htdocs\S3.php on line 358
    Something went wrong while uploading your file… sorry.

  221. Arifcsecu(Bangladesh) says:

    When I set
    S3::$useSSL = false;
    I got these errors:

    Warning: S3::putBucket(abc, private, ): [RequestTimeTooSkewed] The difference between the request time and the current time is too large. in C:\xampp\htdocs\np\S3.php on line 226

    Warning: S3::putObject(): [RequestTimeTooSkewed] The difference between the request time and the current time is too large. in C:\xampp\htdocs\np\S3.php on line 358
    Something went wrong while uploading your file… sorry.

  222. Heron says:

    Hi Arifcsecu,

    Amazon S3 uses time to verify signatures; if the signature says “10:00 am GMT” but it’s actually 5:00 pm GMT, Amazon will reject the request.

    Make sure the computer running the script has its time set properly – either set it to the correct GMT time, or set the time and the timezone properly for its location.

  223. Liam says:

    I have just started using your S3 class, and when I try to upload a file I receive the following error: “S3::inputFile(): Unable to open input file:”

    I am completely lost to what is causing this error, any help would be much appreciated!


  224. Arifcsecu(Bangladesh) says:

    Thanks, Heron.
    I have got more than 1000 of those errors here.

    Now this works fine.

    Again thank you.

  225. Loz says:

    I can’t get this to work 🙁

    <?php echo $s3->getAuthenticatedURL("evp-4b69f2e172842-e7015e5af93fdd8e3157715cb3c112db", "video.mp4", 120); ?>

    spits out this error

    Fatal error: Call to undefined function getAuthenticatedURL()

    any help appreciated.

    I used the firefox S3 plugin, and set EVERYONE to NO, and authenticated users to YES

    I don’t understand what’s going on here 🙁

  226. Heron says:

    Loz: did you require(‘S3.php’); at the top of your PHP code? Did you properly initialize the $s3 variable by creating an instance of the S3 class?

    If you’re trying to use it statically, you need to set the S3 access key and secret key with the appropriate static method, and access it via S3::methodname() rather than with a variable.

    I can give you more help, but you’ll have to show more of your code than just one line. Feel free to e-mail me at if you need more help.

  227. jbq says:

    I created a bucket with s3cmd with both lower and uppercase chars, and when using your class I kept getting the following error until I removed the strtolower($bucket) call in the S3Request constructor: S3::putObject(): [NoSuchBucket] The specified bucket does not exist

  228. jbq says:

    OK, I understand now: I have to call putObject() with an empty bucket name and instead include the bucket name in the URI. This way I don’t run into the problem and don’t get a signature mismatch. There must be a bug somewhere in the S3 class when the bucket parameter is defined and “Host:” is used.

  229. Loz says:

    Hi Heron

    Thanks for replying.

    yes, this is the code I used at the top of my php page.

    XXXX = my amazon key stuff.

    require_once 'S3.php';
    $s3 = new S3("$S3AWSID", "$S3AWSSECRET");

    This is the ebook I followed.

    The S3.php page is in the same directory as that of the php file that contains that information at the top of the page.

    Thanks for your email, i will send you a note shortly.



  230. chandrav says:

    If the bucket name has “/” in it, it does not create a recursive directory.

    Say you want to create a directory test2 inside test:

    $handle->putBucket("test/test2", 'public-read');

    this gives me an error.

    Is there a way to do this?

  231. Heron says:


    There’s no such thing as a ‘folder’ or ‘directory’ in S3 buckets. If you want to simulate directories, you need to prefix your object names with the virtual directory name. For example, suppose you want to pretend there’s a folder named “test”. What you need to do is to name all of the objects that go in that folder such that they begin with “test/”. So if “foobar.html” goes in the “test” folder, you’d put it in your S3 bucket with the name “test/foobar.html”.

    Again, S3 has no concept of folders or directories. S3Fox and tools like it simply pretend there are folders, based on filename prefixes.
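Heron's example translated to code (bucket and file names invented; `putObjectFile` is the class's own upload helper, while `prefixedKey` is a hypothetical helper of mine):

```php
<?php
// There is no mkdir() for S3: a "folder" exists only as a name prefix.
// This tiny hypothetical helper just builds a slash-prefixed object name
// so that tools like S3Fox display the object inside a virtual folder.
function prefixedKey($folder, $file)
{
    return rtrim($folder, '/') . '/' . $file;
}

// Usage with the class (names made up):
//   $s3->putObjectFile('foobar.html', 'my-bucket',
//       prefixedKey('test', 'foobar.html'), S3::ACL_PUBLIC_READ);
```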

  232. Scott says:

    I seem to be bumping into the same SSL problem everyone else. I’m running tests on a local XAMPP install- I can upload files just fine and verified them via S3Fox. However, any attempt to list buckets or put buckets or do anything else drops the following error:

    Warning: S3::listBuckets(): [60] SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed in C:xampphtdocsS3NativeS3.php on line 91

    I tried setting $useSSL=false in the constructor, but it didn’t change anything. Ideas?

  233. ian says:


    For a workaround, edit the S3 class, and go to ~ line 1222.

    You have to disable peer verification by replacing 1 with 0 in this line:

    curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, 1);

    Hope this helps.

  234. Andrew Slater says:

    Same SSL3 etc problem — this is new as of yesterday for me. Nothing changed on my end, so I’m guessing that maybe something changed on Amazon’s end that we need to change in the code?


  235. Andrew Slater says:

    @ian — that did it. Looks like I came to the site here about 40 seconds too early!

  236. a.yamanoi says:

    Thanks for a great class!

    In the copyObject(…) function,

    -$rest->setAmzHeader('x-amz-copy-source', sprintf('/%s/%s', $srcBucket, $srcUri));
    +$rest->setAmzHeader('x-amz-copy-source', sprintf('/%s/%s', $srcBucket, rawurlencode($srcUri)));

    I use the above code for multibyte filenames.

  237. mconnors says:

    I could really use some help. After having a massive directory tree of a million files uploaded to a bucket with AWS Import, I can’t see the folders with anything other than Cyberduck or S3Fox. All other attempts, including this class, won’t show the folders. Any ideas?

  238. rvera10 says:

    I’m new to PHP and was wondering if I could get some help with getting a file from S3 back to the browser. I believe I need to use the getObject(‘bucketname’, ‘uri’) function, but I do not want to save it to the server and then open it again to flush it out to the browser. I would like to take the returned object (or a reference to what’s being returned by getObject) and probably use something like fpassthru(). I’m not sure if this is the correct way to do it.

    Any help will be greatly appreciated.

    Thanks in advance.

  239. Heron says:


    If you want the file on S3 to be the only thing passed to the user, you could simply redirect them to the URL of the object on S3 (perhaps using an authenticated URL if it’s a private file).

    If you want the file embedded in other output, though, well it depends. If it’s an image, then again you can use the URL straight to the S3 object as the image’s URL in the HTML output. If it’s a text file, then S3 isn’t really a good place to store it.
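The redirect Heron describes is a one-liner once you have a signed URL. For illustration, the sketch below re-derives the query-string authentication that the class's `getAuthenticatedURL()` performs (AWS signature version 2); in practice you would call that method directly. Bucket and object names are invented.

```php
<?php
// Hypothetical re-implementation of S3 query-string authentication
// (what getAuthenticatedURL() does): sign "GET\n\n\n{expires}\n/bucket/key"
// with the secret key and append the result to the object URL.
function signedS3Url($accessKey, $secretKey, $bucket, $uri, $lifetime)
{
    $expires = time() + $lifetime;
    $stringToSign = "GET\n\n\n{$expires}\n/{$bucket}/{$uri}";
    $signature = base64_encode(hash_hmac('sha1', $stringToSign, $secretKey, true));
    return sprintf('http://%s.s3.amazonaws.com/%s?AWSAccessKeyId=%s&Expires=%d&Signature=%s',
        $bucket, $uri, $accessKey, $expires, urlencode($signature));
}

// Redirect the browser straight to S3 so the file never passes through
// this server:
//   header('Location: ' . signedS3Url($key, $secret, 'my-bucket', 'private/report.pdf', 300));
//   exit;
```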

  240. Heron says:


    That sounds like something you should contact the S3 team about (try the forums on the AWS site). I don’t know why that might be happening.

  241. Todd says:

    Finally needed to look into S3, and this class has saved me a lot of time. Thanks!

  242. Gnome says:

    S3 versioning please.
    v0.4.0 is very old.

  243. pawan kumar says:

    Hello Sir

    This script works fine for small files, but when I upload a large file, e.g. 80 MB, the file is moved to S3 OK but PHP gives up waiting and so does not execute the rest of the code. Can anyone provide me with a solution for this?

  244. Chris Dean says:

    Just seen this and admit I’ve not read anything about it fully yet, but I was thinking of building a kohana module around cloudfusion ( to do something similar…

    Not sure if it’d be of benefit to you to investigate cloudfusion integration for this project?


  245. RJ says:

    Is there a way I can generate a URL for files to be downloaded directly? Currently the URL generated through getAuthenticatedURL opens the image in the client’s browser; I want it to be downloaded.


  246. John H. says:

    Is there a way we can just list folders in a bucket? I am having a hard time figuring that out.

  247. Andrew says:

    Hi there,
    great class – however I can’t test it, and I think the problem is that I run my server locally via MAMP PRO on a special port on my Mac. The class creation doesn’t work and the page loads endlessly without any error message.

    Can anyone share a tip on what this could be?

    Thanks a lot for any help!

  248. Andrew says:

    Problem solved – my Mac didn’t allow the outbound call to S3 Server 🙂

  249. Terenn says:

    Thank you very much!

    EDIT: “Your comment looks like it could be spam.
    Please try altering your input.”

  250. Frank Koehl says:

    Just wanted to ping this post, make sure that development on this class wasn’t dead. Has the Amazon API simply stabilized?

  251. Ty Wangsness says:

    I was having a hard time getting this class to let me create buckets in the US West or Asia Pacific regions. The location string for US West is “us-west-1” and the location string for Asia Pacific is “ap-southeast-1”. Then you have to edit one line in this class due to Amazon being case sensitive:

    Line 213:
    $locationConstraint = $dom->createElement('LocationConstraint', strtoupper($location));

    should become:

    $locationConstraint = $dom->createElement('LocationConstraint', $location);

  252. Heron says:

    @Frank Koehl: The API has not changed very much at all since the last update, with the exception of S3’s recently released RRS feature. If there’s a big demand for it I’ll pester Don to update the code 😉

    The code as it stands is quite stable and easy to use.

  253. Just wanted to say thanks for this class and also curious as to if you have plans to update it to manage the recent bucket policy additions to S3. Thanks again.

  254. vinod says:

    Hey bro, thank you very much. Among the libraries out there I think yours is the best: it’s simple and has nice documentation and an example good enough for me…

  255. Anoop says:


    I have to transfer my files from Amazon S3 to my new server. My bucket has a large number of images and videos. Is it possible to transfer all my files to my new server using a PHP script?

    Thanks in Advance,


  256. Bruce says:

    When I pass a short string to putObject it takes minutes to complete. The same data passed in a file completes immediately.

    $data = "0123456789\n";
    $put = $s3->putObject($data, $bucket, $path, S3::ACL_PUBLIC_READ);

    $data = $s3->inputFile("file.txt");
    $put = $s3->putObject($data, $bucket, $path, S3::ACL_PUBLIC_READ);

  257. Andris says:

    If there’s a backslash in the path of an object (not a slash separating folders), then copyObject fails with “The request signature we calculated does not match the signature you provided.” A minor problem, and I’m looking into fixing it, but if anyone has a suggestion… Thanks.

  258. Andris says:

    I found the solution… You have to urlencode() the object’s name

  259. Jaka says:

    This is really poorly designed.

    All the objects share a single pair of credentials. They don’t behave as you would expect objects to behave at all.

  260. Don says:


    Yes – I’m busy with a rewrite that will. But if you don’t like it, use something else 🙂

  261. vince says:

    I like the profuseness of the comments.

    it seems that this class doesn’t support bucket names with capital letters, even though S3 will treat two buckets such as ‘SomeBucket’ and ‘somebucket’ as distinct, and will let them coexist as separate entities.

    If a user tries something like $s3->putBucket('SomeBucket');
    a bucket with the name ‘somebucket’ will be created in S3.

    If a user has created a bucket with a name of ‘SomeBucket’ via some other means (like the AWS Management Console), it will be inaccessible with this class.

  262. aaron francis says:

    First off, thanks so much for this class. Works like a dream.

    Quick question though, every time I try to use the prefix option on getBucket I get a SignatureDoesNotMatch error.

    $contents = $s3->getBucket("bucket"); – works
    $contents = $s3->getBucket("bucket", "a"); – error

    Warning: S3::getBucket(): [SignatureDoesNotMatch] The request signature we calculated does not match the signature you provided. Check your key and signing method. in /Applications/MAMP/htdocs/assets/classes/S3.class.php on line 126

    I’ve tried urlencoding different things and also sorting headers lexicographically. I just can’t seem to get it down. Any ideas?

    • Peter says:

      Hey aaron francis,

      I had the same problem. Googled for a bit but could not find an answer. My problem was that my bucket name was “SomeBucket”. When I changed it to “somebucket” everything worked!

  263. This class is awesome, use it all the time.

    It would be great if it was updated with the invalidation and default-object features amazon has added lately.

  264. Morne says:

    Hi there, how can I bulk upload files in a directory to my s3 bucket?

    Can I just get all file names from the directory and put it in to an array, then use the putObject() to upload?

  265. Rubén Martínez says:

    Sorry, but the site documentation doesn’t explain this clearly, so I hope you can excuse my ignorance.

    I’m testing using S3 as a remote backup for critical files in a PHP application. I’m developing on WAMP (Win XP). When I try to run your example.php sample, I get the [60] error about no certificates.

    I want to use SSL, so I’ve downloaded the cert & key generated by the Amazon control panel (cert-****.pem & pk-****.pem), and then added these instructions to getResponse():

    if (S3::$useSSL) {
        curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 2);
        curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, 1);

        // New lines
        // Certificate & key file in PEM format
        curl_setopt($curl, CURLOPT_SSLCERT, S3::$certFile);
        curl_setopt($curl, CURLOPT_SSLKEY, S3::$pkeyFile);
        // End new lines
    }

    but the error persists.

    On the other hand, downloading a CA certificate bundle from the curl site and adding the following line instead works:

    // CA bundle file
    curl_setopt($curl, CURLOPT_CAINFO, "C:\wamp\www\s3certs\cacert.pem");

    I think that the certificate is obtained automatically and verified against a root certificate present in the bundle, so it works. Still, why wasn’t the first option successful?

    Shouldn’t CURLOPT_SSLCERT mean immediate acceptance? Or are those certificates meant for other things, while SSL uses the generic Amazon certificate for *?

    That would explain why it worked like that – and yet, what are those cert and key for?

  266. chylvina says:

    It’s great. Thank you very much!

  267. Burton Smith says:

    Thank you so much for this class.

    I don’t know how I could have ever fulfilled my client’s expectations for backup to S3 unless I had a pre-written class.

    Much Gratitude!


  268. Stefan says:

    Hi all!

    I’m new to S3.
    Please let me ask a stupid question.
    This class seems to exist quite a long time now.
    Perhaps it was written at a time when Amazon had no PHP API?

    Why don’t you all use the Amazon PHP API?

    Is it not as easy to use as this one?

    Thanks for your comments!

  269. Mike says:

    To me the original Amazon PHP SDK seems too bloated, and its autoloader conflicted with that of my framework. So why use such a beast when there’s a lean class like this?

  270. Thanks a lot for this.

    However, I found that the Content-Type would not be set correctly. I tried various things and couldn’t figure it out, so I decided to make some changes in the class:

    In your putObject function I added this line:

    $input['type'] = self::getMimeType_new_from_jim($uri);

    just before this line:

    if (!isset($input['type'])) {

    Then at the bottom, just before the final }, I added this:

    public static function getMimeType_new_from_jim($file) {
        static $exts = array(
            'jpg' => 'image/jpeg',
            'gif' => 'image/gif',
            'png' => 'image/png',
            'tif' => 'image/tiff',
            'tiff' => 'image/tiff',
            'ico' => 'image/x-icon',
            'swf' => 'application/x-shockwave-flash',
            'pdf' => 'application/pdf',
            'zip' => 'application/zip',
            'gz' => 'application/x-gzip',
            'tar' => 'application/x-tar',
            'bz' => 'application/x-bzip',
            'bz2' => 'application/x-bzip2',
            'txt' => 'text/plain',
            'asc' => 'text/plain',
            'htm' => 'text/html',
            'html' => 'text/html',
            'css' => 'text/css',
            'js' => 'text/javascript',
            'xml' => 'text/xml',
            'xsl' => 'application/xsl+xml',
            'ogg' => 'application/ogg',
            'mp3' => 'audio/mpeg',
            'wav' => 'audio/x-wav',
            'avi' => 'video/x-msvideo',
            'mpg' => 'video/mpeg',
            'mpeg' => 'video/mpeg',
            'mov' => 'video/quicktime',
            'flv' => 'video/x-flv',
            'php' => 'text/x-php'
        );
        $ext = split("[/\.]", $file);
        $n = count($ext) - 1;
        $ext = $ext[$n]; // $ext is now the file extension
        if (isset($exts[$ext])) {
            return $exts[$ext];
        } else {
            return 'application/octet-stream';
        }
    }

    It works well for me. I release the above code to the public domain, free for all to use.

  271. Mind says:

    I got this error today, after many months of using the backup:

    PHP Warning: S3::listBuckets(): [6] Couldn’t resolve host ‘’ in S3.php on line 90

    I can’t understand why… Can you help me?

  272. Hans Malkow says:

    Thank your for your really helpful class – it works great!

    @Mind: If you are located in Europe, you should try “” instead of “”

  273. jonkraftsmall says:

    Thanks for info

  274. Michael says:

    Thank you.

  275. Dave says:

    Hi, I am fairly new to PHP and I am trying to generate a temporary authenticated URL for Amazon S3, using your S3 PHP class and the method below suggested by Wilson Mattos, but I keep getting:

    Fatal error: Call to undefined function add_action() in F:\wamp\www\test\S3.php on line 50

    Any clue to what I may be doing wrong? And thank you for your time and effort in developing this.

    require_once 'S3.php';
    $s3 = new S3("$S3AWSID", "$S3AWSSECRET");

    <?php echo $s3->getAuthenticatedURL("BUCKET", "FILE", EXPIRE_TIME); ?>

  276. Jeff Harden says:


    Firstly, excellent class – it worked first time, so thank you very much!

    I am having an issue (that I don’t think is due to the class) with intermittent connection problems and was wondering whether anybody else has experienced this?

    I get the Warning:

    PHP Warning: S3::putObject(): [7] couldn’t connect to host in /libraries/S3/S3.php on line 358

    As said, this doesn’t happen all the time – it seems to be random. Does anyone have any ideas? I would have thought that S3 was highly available, so the “couldn’t connect to host” doesn’t really make sense!



  277. Richard Soares says:

    Thoughts on why S3 Keys are not authenticating. Getting this error back from S3: [InvalidAccessKeyId] The AWS Access Key Id you provided does not exist in our records.

    Overview: I added the S3.php class into CodeIgniter Libraries and modified all instances of the class name to My_S3. The class seems to be working, but when calling any function in the class I get the above Access Key warning. Sample code below:

    $bucket_list = $this->s3->listBuckets( true );

    Verified the Access Key and Secret Key are working using other tools such as Panic Transmit S3 protocol and SFox for FireFox.

    Using S3 Class 0.4.0 – $Id: S3.php 47 2009-07-20 01:25:40Z don.schonknecht $

    Where should I look for the bug?

    Thanks Richard

    • Anil Singh says:


      How did you resolve this issue? I am getting the same error and have not been able to move forward. Any pointer will be appreciated.

  278. Richard Soares says:

    Code Igniter / S3.php Library bug FIXED:

    The fix to my previous post on 28th Apr 2011 is as follows:


    CI Library classes need to extend the core CI_Controller class.

    Hope this saves someone time.


  279. Roman says:


    There’s a bug in the getBucket() method.
    If 0 is passed to this method as the $maxKeys parameter, the do-while loop will spin until memory_limit or max_execution_time is reached.

    This is how we fix it:
    public static function getBucket($bucket, $prefix = null, $marker = null, $maxKeys = null, $delimiter = null, $returnCommonPrefixes = false) {
        if ($maxKeys === 0) return true; // use === so the default null is not treated as 0


  280. greg says:

    A cheap hack for everyone trying to get reduced redundancy to work:

    $rest->setAmzHeader('x-amz-storage-class', 'REDUCED_REDUNDANCY');
    below the
    $rest->setAmzHeader('x-amz-acl', $acl);
    at the bottom of the putObject function
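    In context, the change looks something like this (a sketch; the ACL line is quoted from putObject(), and the added line is the hack described above):

```php
// Near the bottom of S3::putObject(), where the x-amz headers are set:
$rest->setAmzHeader('x-amz-acl', $acl);
// Added line: ask S3 to store this object with Reduced Redundancy
$rest->setAmzHeader('x-amz-storage-class', 'REDUCED_REDUNDANCY');
```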

  281. John says:

    this is really great script. Thanks webmaster.

  282. Oscar says:

    This is a great class!
    Very useful to me and my requirements, thanks to share it!

  283. Mike says:

    Great work, thank you.
    Can I get all the folder names inside the bucket, and also all the files inside a particular folder?
    I have tried to use the getBucket() method with the prefix option, but could not succeed.
    $contents = $s3->getBucket('bucketName', 'this/folder');

    Please let me know.


  284. Mike says:

    Did anyone get a chance to look into my comment?
    Please help. How to use prefix with the getBucket method?
    $contents = $s3->getBucket('bucketName', 'this/folder');
    Bucket location in S3: /bucketName/this/folder
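    A hedged sketch of using the prefix and delimiter arguments of getBucket() (bucket and path names are placeholders; the prefix is the second argument and should normally end with a slash):

```php
<?php
require_once 'S3.php';
$s3 = new S3('YOUR_AWS_ACCESS_KEY_ID', 'YOUR_AWS_SECRET_KEY');

// Files "inside" a folder: prefix is the 2nd argument (trailing slash matters)
$files = $s3->getBucket('bucketName', 'this/folder/');

// Folder-like names one level down: pass '/' as the delimiter and ask for
// common prefixes to be returned alongside the keys
$folders = $s3->getBucket('bucketName', 'this/', null, null, '/', true);

print_r(array_keys($files));
print_r(array_keys($folders));
```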


  285. This is a great bit of code, thank you. The only slight issue I came across (as mentioned by some other respondents) was that bucket names get cast to lowercase meaning that it can never find an AWS bucket that uses mixed or all upper case.

    But the code is great. Thanks for sharing!

    • Don says:

      This was changed in the master branch… According to the API docs bucket names were never meant to include uppercase characters, but for some reason it was allowed.

  286. Lua says:

    I have the same problem as Troy. Here are the details:

    A PHP Error was encountered

    Severity: User Warning

    Message: S3::putObject(): [55] select/poll returned error

    Filename: libraries/S3.php

    Line Number: 410

    This error appears randomly and now more and more often (the class has worked correctly for me for six months).

    I use Ubuntu Server 7.04, PHP 5.2.3 and the latest class version. The uploaded files are about 70kb–700kb.

    Any idea to solve my issue?

    Ty in advance!!

  287. Matt Robinson says:

    Don, I just wanted to thank you for this. It Just Works™. Perfect!

  288. Simon says:

    Brilliant – simple, easy to use, just works. Thanks!

    I’m currently moving over 300,000 photos in the ZooChat galleries from my web host to S3 using your class – works well.

    I also wrote a simple HTML based file manager for S3 (as a vBulletin forum plugin!) again, using your class. Dead easy.

    Thanks again.

  289. ABCD says:

    This class does not support French characters.

    When trying to copy from a folder called “Àbcdefg” I get an invalid signature error.
    However, the same action on the folders “Abcdefg” and “abcdefg” works perfectly fine.
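    One possible workaround (untested against this class's signing code, so treat it as a sketch with a hypothetical helper): percent-encode each path segment of a key that contains accented characters before passing it to the class:

```php
<?php
// Hypothetical helper: rawurlencode each path segment, leaving '/' intact,
// so accented characters are sent as UTF-8 percent-escapes.
function encodeKey($key) {
    return implode('/', array_map('rawurlencode', explode('/', $key)));
}

echo encodeKey('Àbcdefg/fichier.txt') . "\n";
```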

  290. Chris says:

    The class seems to have been updated with some of the suggestions here, so downloading the latest version will do the job without the need for any modifications. I am glad I came across it, thank you so much.

  291. Thomas Isaksen says:

    Hey Don,

    Just wanted to thank you for this great code. It’s exactly what I need and nothing more. 🙂

    Keep up the good work!

  292. Kanwaljit Singh Nagra says:

    – Recently started learning about EC2 and S3.
    – I tried using the Amazon PHP SDK example, but had little luck.
    – This is so simple and did the trick perfectly! Thank you 😀

  293. Steve says:

    This is great. Does it support the multi-object delete capability that was announced late last year?

  294. Pingback: Creare una galleria fotografica con il Cloud Storage di - parte 8 -

  296. Pingback: PHP顶级类库 – Linux Life Liux Study

  297. Wayne says:

    Is this still being supported? I really like it and was wondering if you will ever add support for an if_object_exists() function

    • don says:

      It is still alive and kicking – see the GitHub page. You should use getObjectInfo(), which does a HEAD request.
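      For example (a sketch; the bucket, key, and credentials are placeholders — getObjectInfo() returns false when the object does not exist):

```php
<?php
require_once 'S3.php';
$s3 = new S3('YOUR_AWS_ACCESS_KEY_ID', 'YOUR_AWS_SECRET_KEY');

// HEAD request only: passing false as the third argument returns
// true/false instead of the full header info array
if ($s3->getObjectInfo('bucketName', 'path/to/file.txt', false) !== false) {
    echo "object exists\n";
}
```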

  298. Lio Eters says:

    Thank you for this useful S3 class! Single file, and so simple to use.

    I had difficulty getting a “force download” link of an object. By default, the links are opened by the browser in another tab, and it tries to play/view them. To get the link to download, I managed to solve it a couple of ways.

    First method was to copy an object over itself and rewrite the header, setting the content type and disposition. But this had to be done for every file.

    The second, better method was to patch the function getAuthenticatedURL to receive a header array, so I can include it in the GET request. I got the idea here:

    Then, I was able to specify:

    $url = $s3->getAuthenticatedURL($bucketName, $filename, 600, false, false, array('response-content-disposition' => 'attachment'));

    This gets me a URL that will force the browser to download, instead of opening it. The solution wasn’t easy to find, so I thought I’d share it here.

Comments are closed.