Copying a large file to a remote server causes it to run out of physical memory

I have a 64-bit server, a Xeon 5405 with 14GB of memory and an Areca 1680 SATA storage controller, that I use for backups, NMS, and testing. I have VMware Server 2.0 running on it, and I want to do some P2V conversions so I can test an application's service pack.

I discovered that in the process of copying the 44GB VMDK file to the server, it runs out of memory. I thought it was a memory leak in VMware Converter, but doing the conversion on a different 32-bit server running Server 2008 doesn't have the problem, as long as I do the conversion to the 32-bit server's local disk.

If I then try to copy the 44GB VMDK file to the server via Windows File Sharing, the server eventually gets all of its memory consumed behind the scenes and the file transfer slows to a crawl.

Unfortunately the Windows FTP client wants to use a temp file on C:\ for its work, which won't work since I don't have that much free space on C:. So I used FileZilla as my client, and I see similar high memory usage (although it goes up to 75% instead of 99%): the transfers start out fast and then get really slow. I see this fast/slow/fast/slow behaviour in the FTP client's transfer rate numbers, and similar behaviour in the network utilization graph.

I tried ESEUtil, and it doesn't do the memory thing, but it's not very fast. It gets 16MB/sec, which is pretty crappy, but at least it's consistent. I don't think it's my storage controller, as I've copied multi-GB files around on the server locally without a problem.