javascript - How do Node.js file descriptors work?
I am trying to write a program that takes all of the files big enough to be worth gzipping and compresses them using the gzip part of the zlib module. I came across the same error described in another question here, the error being that Node.js ran out of file descriptors and was therefore unable to open any other files. That question describes fixing it by increasing the number of file descriptors. In trying that, though, I've come across a couple of questions I can't find an answer to:
1) Are file descriptors shared between parent and child processes? In other words, could I fix the error by creating a new child process whenever a program uses a lot of file descriptors? Does the type of child process matter?

2) How many file descriptors do processes using zlib use? In my program, while trying to zip 1695 files, 673 failed. I know each file takes at least 2 file descriptors (1 read stream and 1 write stream), but is the limit far above that because of how many zlib creates?

3) Is there a way of changing the file descriptor limit inside a Node.js JavaScript file, or can it only be changed externally?

4) Can the limit be changed with command line parameters so it can be application specific?

5) Is it possible to monitor how many file descriptors are in use? That might allow me to slow down the creation of new read/write stream calls, letting older operations complete and free up file descriptors. Preferably this would be a way within Node.js that can be integrated into the JavaScript file.
For example purposes, here is the code of my program:
var fs = require('fs');
var zlib = require('zlib');
var path = require('path');

var compress = 'gzip'; // assumed: the compression type used elsewhere in the program
var errors = 0;

function compressFile(file, type) {
  if (type.indexOf('gzip') >= 0) {
    fs.stat(file, function (err, stat) {
      if (!err) {
        if (stat.size > 1000) {
          var gzip = zlib.createGzip();
          var compiled = fs.createReadStream(file, {autoClose: true}).on('error', function (err) {
            console.log(file, 1);
            //console.log(err);
          });
          var compressed = fs.createWriteStream(file + '.gz', {autoClose: true}).on('error', function (err) {
            console.log(file, 2);
            errors++;
            //console.log(err);
            console.log(errors);
          });
          compiled.pipe(gzip).pipe(compressed);
        }
      } else {
        console.log(err);
      }
    });
  } else {
    console.log('not a supported compression');
  }
}

function compressAll() {
  fs.readdir('./', function (err, files) {
    if (err) {
      console.log(err);
    } else {
      for (var i = 0; i < files.length; i++) {
        var stat = fs.statSync('./' + files[i]);
        if (stat.isDirectory()) {
          var subfiles = fs.readdirSync(files[i]);
          subfiles = subfiles.map(function (value) {
            return files[i] + '/' + value;
          });
          files.splice(i, 1);
          Array.prototype.push.apply(files, subfiles);
          i--;
        } else if (stat.size < 1000) {
          console.log(files[i], stat.size);
          files.splice(i, 1);
          i--;
        } else if (path.parse(files[i]).ext === '.gz') {
          files.splice(i, 1);
          i--;
        } else {
          compressFile(files[i], compress);
        }
      }
      console.log(files.length);
    }
  });
}
As I said before, I attempted to run 1695 files through this and received 673 errors from running out of file descriptors, somewhere around the 1000th file being zipped.
Update: With my new understanding of how file descriptors relate to the OS, I see that questions 1, 3, and 4 don't apply to Node.js specifically, but I'm still wondering about 2 and 5: how many file descriptors does zlib use, and is there a way to monitor file descriptors?
In general, the questions you have seem a bit random, so you might need to try and learn more about file descriptors in general. Also, Windows does not have file descriptors as such, so anything that speaks of file descriptors means something else on Windows.
But, to answer your questions directly:
2) If you mean the Node.js built-in zlib class, it does not use file descriptors at all. If you mean generically starting an external process, then by default Node.js creates a pipe for each of stdin, stdout and stderr. That means it momentarily creates 6 file descriptors, but 3 of them are closed by the parent process, leaving 3 file descriptors per external process.
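For illustration, here is a minimal sketch of that second case (assuming a Unix-like system where gzip exists as an external command; the command choice is arbitrary):

var child_process = require('child_process');

// By default, spawn() wires stdin, stdout and stderr up as pipes.
// Each pipe is created with two descriptors; the child's ends are
// handed to the child, leaving the parent holding three in total.
var child = child_process.spawn('gzip', ['--version']);

child.stdout.on('data', function (data) {
  process.stdout.write(data);
});

child.on('close', function (code) {
  // When the process exits and its streams close, the parent's
  // three pipe descriptors are released.
  console.log('child exited with code ' + code);
});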
5) You can see the open file descriptors of a process on Unix systems by doing fs.readdirSync("/proc/self/fd"). However, since you seem to be on Windows, that will not work, and I'm not the right person to know whether Node.js wraps a usable API for this on Windows.
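As a rough sketch of what such monitoring could look like on Linux (this relies on the /proc filesystem, which is not available on Windows or macOS):

var fs = require('fs');

// Count the descriptors currently open in this process by listing
// /proc/self/fd. Note that readdirSync itself briefly opens one
// extra descriptor for the directory while it reads it.
function countOpenFds() {
  return fs.readdirSync('/proc/self/fd').length;
}

console.log('open file descriptors: ' + countOpenFds());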
The example code you have written creates 2 file descriptors per compressed file and no more. The solution is not to gzip them all in parallel (which is horribly inefficient anyway), but instead to decide on a reasonable degree of parallelism and run that many compressions in parallel.
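As a sketch of that approach, here is one way a concurrency-limited queue could look; the queue structure and the limit of 50 are illustrative assumptions, not part of the original code:

var fs = require('fs');
var zlib = require('zlib');

var MAX_PARALLEL = 50; // illustrative limit; keep well under the fd limit
var active = 0;
var queue = [];

function startNext() {
  if (active >= MAX_PARALLEL || queue.length === 0) return;
  active++;
  var file = queue.shift();
  var finished = false;
  function done() {
    if (finished) return; // guard against error and finish both firing
    finished = true;
    active--;
    startNext(); // a slot freed up, so start the next queued file
  }
  var source = fs.createReadStream(file);
  var target = fs.createWriteStream(file + '.gz');
  source.on('error', done);
  target.on('error', done);
  target.on('finish', done); // output fully written and closed
  source.pipe(zlib.createGzip()).pipe(target);
}

function enqueue(file) {
  queue.push(file);
  startNext();
}

With this, enqueue() can be called for every eligible file up front, and only MAX_PARALLEL read/write stream pairs, and therefore descriptor pairs, exist at any moment.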