

The HTML Form Based Upload is considered Legacy and was removed in LiquidFiles v3.7. Please use the Binary Upload Method instead. If you're currently using HTML Form Based Uploads and wish to continue (for now), then this document outlines how it works.

This is a much more efficient way of sending files than using JSON based uploads, but not as efficient as using the Binary Upload method. The only real problem with it is that it doesn't conform to the API standard way of using JSON for everything, which can lead to some kludges.

An HTML form file input looks like this:

<input type="file" name="Filedata" filename="filename.ext">

Which would lead to the raw data being transmitted like this:

Content-Type: multipart/form-data; boundary=AaB03x
Content-Disposition: form-data; name="Filedata"; filename="filename.ext"

It will enable us to send binary data as content, and the webserver can intercept this before the web application sees it, and so on. This will also send the files separate from the message, and we'll just include references to the files when sending the message.

# Uploading the actual file and getting the attachment id for each file
attachment_id=`curl -X POST --user "$api_key:x" -F Filedata=@bigfile.zip $server/attachments`

Please note in this example that the curl syntax @bigfile.zip means to load the data from the file bigfile.zip. If you enter '-F Filedata=bigfile.zip' without the @, it means send the string "bigfile.zip" as Filedata, which won't send the data from the file.

Example Request using curl with Pool ID

#!/bin/bash
attachment_id=`curl -X POST --user "$api_key:x" -F Filedata=@bigfile.zip -F pool_id='partner-pool' $server/attachments`

Please note that this is not your normal JSON formatted response: the attachment id is returned as a plain string.

Chunked Uploads

This will enable you to divide a large file and send it in smaller pieces. It only works with the html form based upload, and works by you splitting a large file into smaller pieces (chunks) and sending the chunks individually. When completed, the server will rebuild the complete file. If an upload fails, the entire file doesn't have to be retransmitted. And some devices, such as Microsoft ISA and TMG proxies, struggle with files larger than 2GB. Sending files in chunks will get around this and enable files of unlimited (well, limited by disk space) size.

Request

Request URL: /attachments

Filedata  # The html multipart file data
name      # String. The name of the file. This is needed because we can no longer use the filename from the multipart file data.
chunk     # Integer. The current piece, between 0 and (number of pieces - 1)
chunks    # Integer. The total number of pieces

Response

Attachment ID  # String. A unique string representing the attachment id.

In this example, we're taking bigfile.zip, splitting it into two files, bigfile.zip.00 and bigfile.zip.01, and sending them individually like this:

#!/bin/sh
curl -X POST --user "$api_key:x" -F Filedata=@bigfile.zip.00 -F name=bigfile.zip -F chunk=0 -F chunks=2 $server/attachments
attachment_id=`curl -X POST --user "$api_key:x" -F Filedata=@bigfile.zip.01 -F name=bigfile.zip -F chunk=1 -F chunks=2 $server/attachments`

In this example there are a few things to highlight:

- We will get the attachment id only when the last piece has been uploaded.
- The individual chunk sizes don't matter. If you're sending three chunks, they can be two big ones and one small, three of equal size, or one big, one medium and one small.
- You can send chunks in any order you want, as long as you number the chunks correctly. You can for instance begin by sending the second chunk with chunk=1, followed by the first one with chunk=0. It's the chunk number that orders the pieces correctly on the server.
- The script above doesn't have any error handling. You need to make sure that you get an http response code 200 (success) after each chunk, and resend any chunks that fail accordingly.

The "name" parameter needs to be unique for that user until the entire file has been uploaded; it ties the pieces together so that, when all pieces have been uploaded, the server can rebuild the attachment and give you the attachment id.
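The chunking scheme described above can be illustrated entirely locally: the pieces can be produced with split(1) (the -d numeric-suffix option is GNU coreutils), and because ordering comes from the chunk numbers, concatenating the pieces in numeric order restores the original file, which is what the server does once all chunks have arrived. A minimal sketch with illustrative file names:

```shell
#!/bin/sh
# Illustration only: split a file into numbered pieces, then rebuild it
# by concatenating the pieces in chunk-number order -- the same thing
# the server does after the last chunk is uploaded.

printf 'this stands in for a large upload' > bigfile.zip

# Split into 8-byte pieces named bigfile.zip.00, bigfile.zip.01, ...
split -b 8 -d bigfile.zip bigfile.zip.

# The piece count is what you would pass as the "chunks" parameter
chunks=$(ls bigfile.zip.[0-9][0-9] | wc -l | tr -d ' ')
echo "chunks=$chunks"

# Rebuild: the shell glob sorts lexically, which matches numeric order here
cat bigfile.zip.[0-9][0-9] > rebuilt.zip
cmp -s bigfile.zip rebuilt.zip && echo "rebuild matches original"
```

In a real upload, each bigfile.zip.NN piece would be sent with -F Filedata=@bigfile.zip.NN and the matching -F chunk=NN, as in the chunked example above.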
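The error handling the chunked script lacks can be sketched as a small retry wrapper: run a command that prints an HTTP status code, and resend until it reports 200 or a retry limit is hit. The with_retries and flaky helpers below are illustrative, not part of the API; flaky merely stands in for a server that fails once before succeeding.

```shell
#!/bin/sh
# Sketch of per-chunk error handling: retry until HTTP 200 or give up.
# Real usage would wrap the curl call from the chunk example, e.g.:
#   with_retries curl -s -o response.txt -w '%{http_code}' -X POST \
#       --user "$api_key:x" -F Filedata=@bigfile.zip.00 \
#       -F name=bigfile.zip -F chunk=0 -F chunks=2 $server/attachments

# Run "$@" up to 3 times; the command is expected to print an HTTP status.
with_retries() {
    attempt=1
    while [ "$attempt" -le 3 ]; do
        status=$("$@")
        if [ "$status" = "200" ]; then
            echo "ok after $attempt attempt(s)"
            return 0
        fi
        attempt=$((attempt + 1))
    done
    echo "giving up" >&2
    return 1
}

# Stand-in "server": fails with 503 on the first call, then returns 200.
# A marker file is used because $(...) runs the command in a subshell.
flaky() {
    if [ -f /tmp/flaky_ok ]; then echo 200; else touch /tmp/flaky_ok; echo 503; fi
}

rm -f /tmp/flaky_ok
with_retries flaky    # prints "ok after 2 attempt(s)"
```

A failed chunk only costs a resend of that one piece, which is the point of chunking: the rest of the file never has to be retransmitted.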
