We had some peculiar errors with a web application running under Sun Glassfish the other day. If you're running Glassfish on Linux, chances are the user process has a limit of 1024 open files. This may sound like a lot, but if you're running a web server such as Glassfish with multiple applications under it, you can soon run into 'interesting issues' that can appear in any part of the process. In a real-world example, our app started getting broken pipes on SMTP connections (along with nastier issues where Glassfish locked up). Digging further, it turned out a lovely programmer had coded the system to read resource files from the file system but never bothered to close the files. Read around the Internet and best practice pretty much everywhere says to always close any file streams that are opened, whether for reading or writing.
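The fix itself is simple enough: close the stream in a finally block so the descriptor is released even if the read throws. Here's a minimal sketch of the pattern (the class and method names are just for illustration, not taken from the offending code):

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class CloseYourStreams {
    // Read a resource file into memory, always releasing the descriptor when done
    public static byte[] readResource(File f) throws IOException {
        byte[] buffer = new byte[(int) f.length()];
        FileInputStream is = new FileInputStream(f);
        try {
            is.read(buffer);   // use the stream
            return buffer;
        } finally {
            is.close();        // always runs, so the file descriptor goes back to the OS
        }
    }
}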
So I thought I'd write a program to test it out. Here it is...
package testing;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.FilenameFilter;
import java.io.IOException;

/**
 * Create a wad of files for testing the open files limit on Linux.
 * args : I = input streams only (the default)
 *      : O = output streams only
 *      : B = both. Generates output files then reads them back in
 *      : CLEANUP = delete the generated files
 * Second arg : Y = close each stream after use
 * @author jamesb
 *
 */
public class InputStreamTest
{
    private static int NUMFILES = 5000;
    private static byte[] DATA = "THIS IS A TEST OF FILEINPUT OUTPUTSTREAMS AND THE EFFECTS OF NOT CLOSING THEM".getBytes();

    public static void main(String[] args) {
        String whatToDo = "I"; // I meaning input streams
        boolean closeFiles = false;
        if (args != null && args.length > 0) {
            whatToDo = args[0].toUpperCase();
            if (args.length > 1 && args[1].equalsIgnoreCase("Y")) {
                closeFiles = true;
            }
        }
        InputStreamTest tester = new InputStreamTest();
        if (whatToDo.equals("CLEANUP")) {
            tester.cleanup();
            System.exit(0);
        }
        System.out.println("Waiting 10 seconds for you to find the process ID");
        synchronized (tester) {
            try {
                tester.wait(10000);
            } catch (InterruptedException e) {
                System.err.println("All over red rover");
            }
        }
        if (whatToDo.equals("O") || whatToDo.equals("B")) {
            tester.doOutputStreams(closeFiles);
            System.out.println("All files written. Waiting 2 seconds");
            synchronized (tester) {
                try {
                    tester.wait(2000);
                } catch (InterruptedException e) {
                    System.err.println("All over red rover");
                }
            }
        }
        if (whatToDo.equals("I") || whatToDo.equals("B")) {
            tester.doInputStreams(closeFiles);
        }
        System.out.println("NOW WAITING 10 MINUTES. GO AND CHECK OPEN FILES. YOU HAVE PLENTY OF TIME");
        System.out.println("HINT ON LINUX /usr/sbin/lsof etc etc etc");
        int TENMINUTES = 1000 * 60 * 10; // ten minutes in milliseconds
        synchronized (tester) {
            try {
                tester.wait(TENMINUTES);
            } catch (InterruptedException e) {
                System.err.println("All over red rover");
            }
        }
    }

    private File[] getFiles() {
        String tmpDir = System.getProperty("java.io.tmpdir");
        System.out.println("java.io.tmpdir=" + tmpDir);
        File dir = new File(tmpDir);
        File[] files = dir.listFiles(new FilenameFilter() {
            public boolean accept(File directory, String name) {
                return name.startsWith("InputStreamTest_");
            }
        });
        return files;
    }

    public void cleanup() {
        File[] files = getFiles();
        for (File f : files) {
            System.out.println("Deleting file [" + f.getName() + "]");
            f.delete();
        }
        System.out.println(files.length + " files deleted");
    }

    public void doOutputStreams(boolean closeFiles) {
        System.out.println("Generating files");
        String tmpDir = System.getProperty("java.io.tmpdir");
        System.out.println("java.io.tmpdir=" + tmpDir);
        long timeStamp = System.currentTimeMillis();
        System.out.println("Timestamp = " + timeStamp);
        for (int i = 0; i < NUMFILES; i++) {
            try {
                String fileName = tmpDir + File.separator + "InputStreamTest_" + timeStamp + "_" + i + ".dat";
                System.out.println("Creating file [" + fileName + "]");
                File file = new File(fileName);
                FileOutputStream os = new FileOutputStream(file);
                os.write(DATA);
                os.flush();
                if (closeFiles) {
                    System.out.println("Closing file " + file.getName());
                    os.close();
                }
                // when closeFiles is false the stream is deliberately left open
            } catch (IOException e) {
                System.err.println("Unable to write file");
                e.printStackTrace(System.err);
                throw new RuntimeException(e);
            }
        }
        System.out.println(NUMFILES + " files created");
    }

    public void doInputStreams(boolean closeFiles) {
        System.out.println("Reading files");
        File[] files = getFiles();
        if (files == null || files.length == 0) {
            System.out.println("No files to read!!");
            return;
        }
        byte[] tmpStore = new byte[DATA.length];
        for (File f : files) {
            try {
                System.out.println("Reading file " + f.getName());
                FileInputStream is = new FileInputStream(f);
                is.read(tmpStore);
                if (closeFiles) {
                    System.out.println("Closing file " + f.getName());
                    is.close();
                }
                // when closeFiles is false the stream is deliberately left open
            } catch (IOException e) {
                System.err.println("Unable to read file");
                e.printStackTrace(System.err);
                throw new RuntimeException(e);
            }
        }
        System.out.println(files.length + " files have been read");
    }
}
Run it with these options:
> java testing.InputStreamTest B : writes 5000 files and then reads them back, never closing the streams
> java testing.InputStreamTest I : reads the 5000 files, not closing after each read
> java testing.InputStreamTest O : writes 5000 files, not closing after each write
> java testing.InputStreamTest B Y : writes and then reads the files, closing each stream after use
> java testing.InputStreamTest CLEANUP : deletes all the generated files
Where we don't close the streams I can get the reader side to break almost every time with the error Too many open files. It is a bit harder to break under Linux and depends on system speed and how often garbage collection occurs (when a leaked stream is eventually collected, its finalizer closes the descriptor for you, which is why GC frequency matters). But it can be broken!
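While the program sits in one of its wait periods you can watch the descriptor count climb for yourself. Something along these lines should do it, where 12345 stands in for the JVM's process ID (find it with ps or jps):
> /usr/sbin/lsof -p 12345 | wc -l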
How can you fix this on Linux? Use the ulimit command. On a standard Linux install the limit will be 1024; type ulimit -n to see what yours is. I'm not sure yet of the best way to increase the limit though. Still working on it.
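For what it's worth, the usual suspects are raising the soft limit in the shell that starts the JVM (only up to the hard limit, unless you're root), and making it permanent per user through /etc/security/limits.conf. Treat this as a sketch rather than gospel, and note that the glassfish user name below is just an example:
> ulimit -n : show the current soft limit
> ulimit -n 4096 : raise it for this shell and anything started from it
and in /etc/security/limits.conf something like:
glassfish soft nofile 4096
glassfish hard nofile 8192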