Corrupt ZIP file when calling Save twice

I am using DotNetZip 1.9.6 in my application which uses a file structure similar to e.g. *.docx: Zip file containing XML files.
Now every module of the application can store such XML files to my custom file management and on "save" they are serialized to streams which are then saved to the Zip file via DotNetZip.
To update the entries I use ZipFile.UpdateEntry(path, stream). This works fine, and the first time I save my file by calling ZipFile.Save() everything works.

But if I do this a second time (first some UpdateEntry calls, then Save) on the same instance, the Zip file is corrupted: the file structure and metadata (e.g. the uncompressed size of each file) are still there, but every file is 0 bytes in compressed size.

If I create a new instance from the just-saved file after saving, everything works fine, but shouldn't it be possible to avoid that and reuse the same instance?

The following example (also see https://dotnetfiddle.net/mHxEIy) can be used to reproduce the problem:

using System.IO;
using System.Text;

public class Program
{
    public static void Main()
    {
        var zipFile = new Ionic.Zip.ZipFile();

        var content1 = new MemoryStream(Encoding.Default.GetBytes("Content 1"));
        zipFile.UpdateEntry("test.txt", content1);

        zipFile.Save("test.zip"); // here the Zip file is correct
        //zipFile = new Ionic.Zip.ZipFile("test.zip"); // uncomment and it works too

        var content2 = new MemoryStream(Encoding.Default.GetBytes("Content 2"));
        zipFile.UpdateEntry("test.txt", content2);

        zipFile.Save();  // after that it is corrupt
    }
}

To run this you need to add the "DotNetZip 1.9.6" NuGet package.

After the first save the archive is correct (screenshot: "Correct"); after the second save it is corrupt (screenshot: "Corrupt").

Jon Skeet

This looks like it's a bug in the library, around removing an entry. If you just remove an entry and then save again, it correctly removes the file.

However, if you remove an entry and then add another one with the same name - which is what UpdateEntry is documented to do if the entry already exists - the old entry appears to be used instead.

The reason you're ending up with an empty file the second time is that the original MemoryStream is being read again - but by now, it's positioned at the end of the data, so there's no data to read. If you reset the position to the start of the stream (content1.Position = 0;) it will rewrite the original data. If you modify the data within content1, you end up with invalid compressed data.
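The stream behaviour described above can be seen without DotNetZip at all; this minimal sketch reads the same MemoryStream twice and shows that the second read yields nothing until the position is reset:

```csharp
using System;
using System.IO;
using System.Text;

public class Program
{
    // Reads the same MemoryStream three times: once to consume it, once while
    // it is still positioned at the end, and once after rewinding it.
    public static int[] ReadThreeTimes()
    {
        var stream = new MemoryStream(Encoding.Default.GetBytes("Content 1"));
        var buffer = new byte[16];

        int first = stream.Read(buffer, 0, buffer.Length);   // consumes all 9 bytes

        int second = stream.Read(buffer, 0, buffer.Length);  // 0 bytes: already at the end

        stream.Position = 0;                                 // rewind before reading again
        int third = stream.Read(buffer, 0, buffer.Length);   // 9 bytes again

        return new[] { first, second, third };
    }

    public static void Main()
    {
        Console.WriteLine(string.Join(" ", ReadThreeTimes())); // prints "9 0 9"
    }
}
```

This is exactly what happens inside the second Save(): the entry's source stream is still the exhausted content1, so the compressed data ends up empty.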

The only workaround I can immediately think of is to keep your own map from filename to MemoryStream, and replace the contents of each MemoryStream when you want to update it... or just load the file each time, as per your existing workaround.
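The map-of-streams workaround might look something like the following sketch. EntryCache, SetContent and GetContent are hypothetical names of mine, not DotNetZip API, and whether this actually sidesteps the corruption depends on the library bug described above; the sketch only shows the in-place replacement mechanics:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

public class EntryCache
{
    // One long-lived MemoryStream per entry name, so the ZipFile keeps
    // seeing the same stream instance across multiple saves.
    private readonly Dictionary<string, MemoryStream> entries =
        new Dictionary<string, MemoryStream>();

    // Returns true when the stream is new, i.e. the caller still needs to
    // register it once with zipFile.UpdateEntry(name, stream).
    public bool SetContent(string name, string text)
    {
        MemoryStream stream;
        bool isNew = !entries.TryGetValue(name, out stream);
        if (isNew)
        {
            stream = new MemoryStream();
            entries[name] = stream;
        }

        // Replace the contents in place, then rewind so the next
        // ZipFile.Save() reads the fresh data from the start.
        stream.SetLength(0);
        var bytes = Encoding.UTF8.GetBytes(text);
        stream.Write(bytes, 0, bytes.Length);
        stream.Position = 0;
        return isNew;
    }

    public string GetContent(string name)
    {
        return Encoding.UTF8.GetString(entries[name].ToArray());
    }
}

public class Program
{
    public static void Main()
    {
        var cache = new EntryCache();
        cache.SetContent("test.txt", "Content 1");
        // ... zipFile.UpdateEntry("test.txt", stream) once, then zipFile.Save() ...
        cache.SetContent("test.txt", "Content 2");
        // ... zipFile.Save() again: the stream now holds the new data ...
        Console.WriteLine(cache.GetContent("test.txt")); // prints "Content 2"
    }
}
```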

It's definitely worth filing a bug around this though, as it should work as far as I can tell.


See more on this question at Stackoverflow