Asked  7 Months ago    Answers:  5   Viewed   44 times

Well, this one seems quite simple, and it is. All you have to do to download a file to your server is:

file_put_contents("Tmpfile.zip", file_get_contents("http://someurl/file.zip"));

Only there is one problem. What if you have a large file, like 100 MB? Then you will run out of memory and fail to download the file.

What I want is a way to write the file to the disk as I am downloading it. That way, I can download bigger files, without running into memory problems.

 Answers

51

Since PHP 5.1.0, file_put_contents() supports writing piece-by-piece by passing a stream handle as the $data parameter:

file_put_contents("Tmpfile.zip", fopen("http://someurl/file.zip", 'r'));

From the manual:

If data [that is the second argument] is a stream resource, the remaining buffer of that stream will be copied to the specified file. This is similar with using stream_copy_to_stream().

(Thanks Hakre.)
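For reference, the same streaming copy can be written out explicitly with stream_copy_to_stream(), which is what file_put_contents() effectively does in this case. A minimal sketch using the hypothetical URL and filename from the question, with basic error handling added (the function name download_to_file is made up for illustration):

```php
<?php
// Copy a source stream to a destination file in chunks,
// without buffering the whole payload in memory.
function download_to_file(string $url, string $dest): int {
    $in = fopen($url, 'rb');  // works for http:// URLs if allow_url_fopen is enabled
    if ($in === false) {
        throw new RuntimeException("Cannot open $url");
    }
    $out = fopen($dest, 'wb');
    if ($out === false) {
        fclose($in);
        throw new RuntimeException("Cannot open $dest for writing");
    }
    $bytes = stream_copy_to_stream($in, $out); // streams chunk by chunk
    fclose($in);
    fclose($out);
    return $bytes;
}

// download_to_file("http://someurl/file.zip", "Tmpfile.zip");
```

Because fopen() accepts local paths as well as URLs, the same function works for copying local files, which makes it easy to test without a network.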

Tuesday, June 1, 2021
 
Ticksy
answered 7 Months ago
18

According to RFC 2046 (Multipurpose Internet Mail Extensions):

The recommended action for an implementation that receives an
"application/octet-stream" entity is to simply offer to put the data in a file

So I'd go for that one.
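On the sending side, that usually means emitting a Content-Type: application/octet-stream header together with a Content-Disposition: attachment header so the browser offers to save the data. A sketch — the helper name download_headers() and its parameters are made up for illustration, not part of any API:

```php
<?php
// Build the response headers that tell a browser to save the payload
// as a file instead of trying to render it. Illustrative helper only.
function download_headers(string $filename, int $size): array {
    return [
        'Content-Type: application/octet-stream',
        'Content-Disposition: attachment; filename="' . $filename . '"',
        'Content-Length: ' . $size,
    ];
}

// Typical use before streaming the file body:
// foreach (download_headers('file.zip', filesize('file.zip')) as $h) { header($h); }
```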

Wednesday, March 31, 2021
 
Akdeniz
answered 9 Months ago
58

It's your GZIP compression. When you specify a content length but turn compression on, it gums everything up. It's happened to me a few times: try turning it off in your script.

Generally you'd turn it on with:

ob_start("ob_gzhandler");

...so just comment that line out. If that's not in your code, chances are there's a setting in your php.ini or in your apache.conf/conf.d files.

Hope this helps!
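If the compression is buried in configuration rather than in your code, one defensive option is to drop all active output buffers and switch off zlib compression before sending the Content-Length header. A sketch, assuming a typical setup (the function name is made up for illustration):

```php
<?php
// Disable output compression so Content-Length matches the bytes
// actually sent. Call this before emitting download headers.
function disable_output_compression(): void {
    if (ini_get('zlib.output_compression')) {
        ini_set('zlib.output_compression', 'Off');
    }
    while (ob_get_level() > 0) {  // discards ob_gzhandler (and any other) buffers
        ob_end_clean();
    }
}
```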

Wednesday, March 31, 2021
 
aslum
answered 9 Months ago
51

Starting with Java 7, you can download a file with built-in features as simply as

Files.copy(
    new URL("http://example.com/update/PubApp_2.0.jar").openStream(),
    Paths.get("C:/PubApp_2.0/update/lib/kitap.jar"));
// specify StandardCopyOption.REPLACE_EXISTING as 3rd argument to enable overwriting

For versions before Java 7, the NIO channel API (available since Java 1.4) does the same job. Note that the try-with-resources syntax shown below itself requires Java 7; with an older compiler, close the channels in a finally block instead:

try(
  ReadableByteChannel in=Channels.newChannel(
    new URL("http://example.com/update/PubApp_2.0.jar").openStream());
  FileChannel out=new FileOutputStream(
    "C:/PubApp_2.0/update/lib/kitap.jar").getChannel() ) {

  out.transferFrom(in, 0, Long.MAX_VALUE);
}

This code transfers a URL's content to a file without any third-party library. If it's still slow, you know the bottleneck is not an additional library's fault and most probably not Java's either; there's nothing left to improve here, so you should look for the cause outside the JVM.

Wednesday, August 4, 2021
 
Mirko
answered 4 Months ago
40
  • You're using an uninitialized pointer, so it points nowhere. Initialize reply to NULL in your constructor.
  • You should connect reply's signals after it is created (reply = manager.get(...)), not in your constructor.
  • QNetworkReply is never deleted by QNetworkAccessManager, as the docs say:

Do not delete the reply object in the slot connected to this signal. Use deleteLater().

So you shouldn't call delete on the QNetworkReply in the finished slot.

  • In the finished slot, setting data to 0 only sets the parameter's value to 0, not your class member reply; that line is unneeded. Set your reply member to NULL instead.

Also, consider writing to the file every time you receive a data chunk; in your current code the whole file is buffered in memory, which can lead to huge memory usage when the file at the given URL is big.

Wednesday, August 11, 2021
 
samayo
answered 4 Months ago