I'm attempting to run through a list of direct download links using the code below. The code runs and downloads the first file just fine, but as it moves to the second file (the second link in the list), it correctly creates the new file and initiates the download, yet stalls at 0 bytes and never continues.
I've stepped through with breakpoints and similar, and all the data looks correct. I've also confirmed that the website in question allows multiple files to be downloaded in rapid succession without any issues.
Any help, feedback or suggestions would be greatly appreciated!
foreach (string s in links)
{
    using (WebClient w = new WebClient())
    {
        try
        {
            Console.WriteLine("Attempting to download " + s);
            w.OpenRead(s);
            string content = w.ResponseHeaders["Content-Disposition"];
            string filename = new ContentDisposition(content).FileName;
            w.DownloadFile(new Uri(s), _directory + filename);
            Console.WriteLine("Downloaded " + filename);
        }
        catch (WebException e)
        {
            Console.WriteLine(e.Message);
        }
    }
}
Furthermore, downloading the files directly, i.e. using the code below, works fine.
using (WebClient w = new WebClient())
{
    w.DownloadFile(new Uri(downloadLinks[0]), _directory + "test");
    w.DownloadFile(new Uri(downloadLinks[1]), _directory + "test1");
    w.DownloadFile(new Uri(downloadLinks[2]), _directory + "test2");
    w.DownloadFile(new Uri(downloadLinks[3]), _directory + "test3");
}
Thanks!
I suspect the problem is this line:
w.OpenRead(s);
That's returning a Stream that you never close. Each undisposed response stream holds on to an HTTP connection, and by default .NET allows only two concurrent connections per host, so once a couple of streams have leaked, the next request blocks waiting for a free connection, which matches the second download stalling at 0 bytes. Now you could just close the stream... but it would be better to use it, and not bother with the DownloadFile call:
using (Stream responseStream = w.OpenRead(s))
{
    string content = w.ResponseHeaders["Content-Disposition"];
    string filename = new ContentDisposition(content).FileName;
    // Copy the already-open response stream straight to disk,
    // writing into the same target directory as before.
    using (Stream fileStream = File.Create(_directory + filename))
    {
        responseStream.CopyTo(fileStream);
    }
    Console.WriteLine("Downloaded " + filename);
}
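For completeness, here's one way the fix might look folded back into the original loop. This is a minimal sketch under the question's own assumptions (the `links` collection and `_directory` field from the question, plus using directives for System, System.IO, System.Net, and System.Net.Mime); Path.Combine is used so `_directory` doesn't need a trailing separator.

foreach (string s in links)
{
    using (WebClient w = new WebClient())
    {
        try
        {
            Console.WriteLine("Attempting to download " + s);
            // Disposing the response stream returns its connection
            // to the pool before the next iteration starts.
            using (Stream responseStream = w.OpenRead(s))
            {
                string content = w.ResponseHeaders["Content-Disposition"];
                string filename = new ContentDisposition(content).FileName;
                using (Stream fileStream = File.Create(Path.Combine(_directory, filename)))
                {
                    responseStream.CopyTo(fileStream);
                }
                Console.WriteLine("Downloaded " + filename);
            }
        }
        catch (WebException e)
        {
            Console.WriteLine(e.Message);
        }
    }
}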