I have a large (~4GB) text file written in MultiMarkdown format, and I would like to convert it to HTML.
I tried:
use strict;
use warnings;
use File::Map qw(map_file);
use Text::MultiMarkdown qw(markdown);

my $filename = shift // die;
map_file(my $text, $filename);   # memory-map the file instead of slurping it
print markdown($text);
but it still runs out of memory.
You need to process the file in chunks, making sure each chunk ends in ignorable whitespace (so as not to split lists, tables, etc.); see the sketch below.
If you provide more information about the structure and contents of the file, we can give you other useful pointers.
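One way to do that in Perl is to read the file in paragraph mode and convert each chunk separately. This is only a minimal sketch: it assumes blank lines are safe split points (they are not for loose lists, tables, or metadata blocks that span blank lines), and converting chunks independently will break cross-chunk features such as footnotes and reference-style links.

#!/usr/bin/perl
# Minimal sketch: convert paragraph by paragraph instead of slurping the
# whole file. Assumes blank lines are safe split points, which may not
# hold for loose lists, tables, or metadata blocks.
use strict;
use warnings;
use Text::MultiMarkdown qw(markdown);

my $filename = shift // die "usage: $0 file.mmd\n";
open my $in, '<', $filename or die "open $filename: $!";

local $/ = "";                 # paragraph mode: read up to the next blank line
while (my $chunk = <$in>) {
    print markdown($chunk);    # each chunk is converted on its own
}
close $in;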
I notice that Discount copes with about 100 MB, while Pandoc seems to cope with about 20 MB. Neither supports exactly the MMD markdown extensions, but both have their own equivalents for most of them.
Isn't the main problem with this plan what you are going to use to read the HTML? Chrome managed to open 100 MB files, but it took tons of memory to, for example, run a search or scroll downward. Maybe you need a plan like Sinan's, but one that produces a separate HTML file for each chunk, with each file ending in a hyperlink to the next one....
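A rough sketch of that idea, building on the paragraph-mode loop above. The part-NNN.html naming and the 500-paragraphs-per-file threshold are arbitrary illustration values, not recommendations; and since each part is converted independently, cross-part footnotes and reference links will not resolve.

#!/usr/bin/perl
# Rough sketch: write converted chunks to a series of part-NNN.html files,
# each ending with a link to the next part.
use strict;
use warnings;
use Text::MultiMarkdown qw(markdown);

my $filename = shift // die "usage: $0 file.mmd\n";
open my $in, '<', $filename or die "open $filename: $!";

local $/ = "";                      # paragraph mode
my ($part, $count, $out) = (0, 0);

while (my $chunk = <$in>) {
    if (!$out || $count >= 500) {   # start a new output file every 500 paragraphs
        if ($out) {
            # link the finished part to the one we are about to create
            printf {$out} qq{<p><a href="part-%03d.html">next</a></p>\n}, $part + 1;
            close $out;
        }
        $part++;
        $count = 0;
        open $out, '>', sprintf("part-%03d.html", $part) or die "open: $!";
    }
    print {$out} markdown($chunk);
    $count++;
}
close $out if $out;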