I have always wanted BinaryReader or Stream to have a method for reading an array in one quick step. Since MS introduced MemoryMappedFiles there is one class, MemoryMappedViewAccessor, that has a method called ReadArray for reading arrays.
Does anyone have an idea how this method works? Currently it's horrible to read arrays from a binary stream: you first have to read the stream as bytes and then copy the byte buffer into the target array format. It would be nice to have this in one step.
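For context, the kind of one-step call I mean looks roughly like this (a minimal sketch; the file name and count are just placeholders):
using System.IO.MemoryMappedFiles;

// Sketch only: read `count` floats starting at byte offset 0 of the mapped file.
using (var mmf = MemoryMappedFile.CreateFromFile("data.bin"))
using (var accessor = mmf.CreateViewAccessor())
{
    int count = 1024;                       // placeholder
    var data = new float[count];
    accessor.ReadArray(0, data, 0, count);  // fills the float[] in one call
}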
I tried to enable .NET Framework source stepping in VS2010, but it doesn't work.
Currently I read my data for several primitive array data types like this:
public static unsafe float[] ReadSingles(this Stream stream_in, int count)
{
    FileStream fileStream = stream_in as FileStream;
    float[] Data = new float[count];
    if (count == 0) return Data;

    fixed (float* dataptr = &Data[0])
    {
        if ((fileStream == null) || (StreamExt.Mode == StreamExtMode.Conventional))
        {
            // Read the raw bytes first, then blit them into the float[].
            byte[] bts = ReadBytes(stream_in, count * sizeof(float));
            Marshal.Copy(bts, 0, new IntPtr(dataptr), bts.Length);
        }
    }
    return Data;
}
Is there a good answer to this?
Thanks, Martin
It boils down to an extern method, so in short: we can't see it directly. It isn't done in managed code, but by the CLI host:
[MethodImpl(MethodImplOptions.InternalCall)]
[ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)]
private static extern unsafe void PtrToStructureNative(
byte* ptr, TypedReference structure, uint sizeofT);
Re your existing code: IMO, the issue here is your choice to allocate count * sizeof(float) bytes up front. If your intention is to avoid the additional byte[] overhead, I would create a smaller buffer (say, Min(count, 1000) * sizeof(float)) and use a loop, filling in Data progressively, as in the sketch below.
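A rough sketch of that chunked approach (the 1000-float cap, method name, and end-of-stream handling are my own illustrative choices, not the original code):
using System;
using System.IO;

public static float[] ReadSinglesChunked(this Stream stream, int count)
{
    var data = new float[count];
    // Cap the scratch buffer at 1000 floats so large counts don't allocate a huge byte[].
    var buffer = new byte[Math.Min(count, 1000) * sizeof(float)];
    int done = 0;
    while (done < count)
    {
        int floatsThisPass = Math.Min(count - done, buffer.Length / sizeof(float));
        int bytesNeeded = floatsThisPass * sizeof(float);
        int read = 0;
        while (read < bytesNeeded)   // Stream.Read may return fewer bytes than requested
        {
            int n = stream.Read(buffer, read, bytesNeeded - read);
            if (n == 0) throw new EndOfStreamException();
            read += n;
        }
        // Blit the bytes straight into the float[] (offsets are in bytes).
        Buffer.BlockCopy(buffer, 0, data, done * sizeof(float), bytesNeeded);
        done += floatsThisPass;
    }
    return data;
}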
Also, if you don't need all the floats at once, consider an iterator block instead, which will slash the memory overheads here (but will mean you only have access to the items as a sequence, not random access).
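A sketch of the iterator-block idea (BitConverter is used here for brevity instead of the pointer copy; names are illustrative):
using System;
using System.Collections.Generic;
using System.IO;

public static IEnumerable<float> EnumerateSingles(this Stream stream, int count)
{
    var buffer = new byte[sizeof(float)];
    for (int i = 0; i < count; i++)
    {
        int read = 0;
        while (read < buffer.Length)
        {
            int n = stream.Read(buffer, read, buffer.Length - read);
            if (n == 0) throw new EndOfStreamException();
            read += n;
        }
        // Only one float is materialised at a time; callers consume it as a sequence.
        yield return BitConverter.ToSingle(buffer, 0);
    }
}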