Clearing cached memory?

Place to discuss Fedora and/or Red Hat
ZiaTioN
administrator
Posts: 460
Joined: Tue Apr 08, 2003 3:28 pm

Clearing cached memory?

Post by ZiaTioN » Wed Oct 12, 2005 6:22 am

Is there a way to force this if the need came about? I know cached memory is similar to free memory in that the kernel will hand the cached memory to any process that asks for it, but I would still like to know if there is a way to release the cache back to the OS so it shows up as free memory.

Master of Reality
guru
Posts: 562
Joined: Thu Jan 09, 2003 8:25 pm

Post by Master of Reality » Wed Oct 12, 2005 7:03 am

There is a way to write out all the cached memory and flush it... but I can't recall how, and I've no idea why you would want to.

Void Main
Site Admin
Posts: 5712
Joined: Wed Jan 08, 2003 5:24 am
Location: Tuxville, USA

Re: Clearing cached memory?

Post by Void Main » Wed Oct 12, 2005 8:27 am

ZiaTioN wrote: Is there a way to force this if the need came about?
I can't imagine the need ever coming about, which is why the only way I know of to clear the I/O cache is by rebooting.
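
For the record, later 2.6 kernels (2.6.16 and up) did add a knob for exactly this, so a reboot is no longer the only option there. A minimal sketch, assuming root and a kernel new enough to have /proc/sys/vm/drop_caches:

    # flush dirty pages to disk first so the clean cache can be dropped
    sync
    # 1 = page cache, 2 = dentries and inodes, 3 = both
    echo 3 > /proc/sys/vm/drop_caches

This only discards clean cache; it does not give applications any memory they could not already have had, since the kernel reclaims cache on demand anyway.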

ZiaTioN
administrator
Posts: 460
Joined: Tue Apr 08, 2003 3:28 pm

Re: Clearing cached memory?

Post by ZiaTioN » Wed Oct 12, 2005 9:43 am

Void Main wrote: I can't imagine the need ever coming about, which is why the only way I know of to clear the I/O cache is by rebooting.
Yeah, I was afraid of that. I can't imagine a reason either; it was just a question I had. I have read a bit about a suggested bug in the 2.4 kernel, never acknowledged as a bug by the kernel authors, where "Out of Memory" errors are reported when an application tries to start and there is not enough free memory for it. That would mean the cached memory was not being used effectively, the way it is supposed to be.
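
One way to sanity-check that is the output of free: the "-/+ buffers/cache:" line counts buffers and cache as available rather than used, so an "Out of Memory" error while that line still shows plenty free really would point at a reclaim problem. A quick sketch:

    # report in megabytes; the top line counts cache as "used",
    # the "-/+ buffers/cache:" line shows what applications can actually get
    free -m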

I am running a 2.6 kernel, however, and was just doing some testing. I have noticed that I sometimes get segfaults in my app when the cached memory is high, but I attribute that to thread limits more than memory usage.

Speaking of which, whenever I reboot my system the ulimit for stack size gets reset to 10240 and I have to manually change it back to 1024 to get the number of threads I need. Where would I change this so it is permanent?

Void Main
Site Admin
Posts: 5712
Joined: Wed Jan 08, 2003 5:24 am
Location: Tuxville, USA

Post by Void Main » Wed Oct 12, 2005 11:45 am

If you want the limit changed just for your user, add the ulimit statement to your ~/.bash_profile. For all users you could put it in /etc/profile. Did you ever get a core dump and try to debug the problem like I asked in the other thread?
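
A minimal sketch of what that line would look like, assuming bash and the 1024 KB stack size mentioned above:

    # in ~/.bash_profile (your user only) or /etc/profile (all users):
    ulimit -s 1024    # per-thread stack size in KB; smaller stacks leave room for more threads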

ZiaTioN
administrator
Posts: 460
Joined: Tue Apr 08, 2003 3:28 pm

Post by ZiaTioN » Wed Oct 12, 2005 10:53 pm

Yeah, but it was like 800 MB or something crazy like that. I ran it through strings but was unable to decipher anything usable in it. I know it has to do with a limit on either threads or sockets. I am more inclined to lean towards threads, though, because when I decrease the stack size to allow for more threads it does not happen any more.

I currently have 563 servers I connect to. For each server I create a thread, and for each thread I create three socket connections: one to the server itself and two to my MySQL database. So for 563 servers I am looking at a total of 563 threads and 563 × 3 = 1,689 socket connections.

However, I split this load in half and only process half at a time, so I am only dealing with roughly 845 sockets and 282 threads at once. What is killing me is that each thread builds a few quite extensive hash tables (well over 1,000 keys in some cases), and since Perl's hash tables are memory-hogging little CeNsOrEd, I am sacrificing system resources for speed of execution. I really do not have an option, though, because I need the hash tables' constant-time O(1) lookups. If I were to use some sort of array instead, I would just be swapping memory hogging for CPU killing, and my app would run about 100,000 times slower. I could use associative arrays, but those are basically hash tables anyway, so why not use hashes.

I currently have a gig of DDR memory and it seems to be doing the trick; I just have to watch the limits and set them back to what they are now after a reboot. I also just bought another gig of PC2700 memory the other day, along with a new mobo and an AMD 64-bit 90nm rev E 3500 processor, so those should help some. I can't run my current gig alongside the new gig, though, since the current stuff is only PC2100. Eventually I would like to run 2 gigs of PC3200 or more. Not sure I want to get into DDR2 just yet.

Void Main
Site Admin
Posts: 5712
Joined: Wed Jan 08, 2003 5:24 am
Location: Tuxville, USA

Post by Void Main » Thu Oct 13, 2005 7:43 am

The core dump would contain an image of the allocated memory, which I am sure is the bulk of it. The most interesting parts for me are at the very beginning of the image. It sounds like you would have to run it through gdb, though, to get anything useful. You can also set system-wide memory limits in /etc/security/limits.conf and /etc/sysctl.conf.
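
Rough sketches of both, assuming the app is the Perl script discussed above (so the core belongs to the perl binary) and that the exact file names here are just placeholders:

    # pull a backtrace out of the core with gdb instead of strings
    gdb /usr/bin/perl core        # "core" is whatever your dump file is actually named
    (gdb) bt

    # /etc/security/limits.conf -- per-user limits, format: <domain> <type> <item> <value>
    *    soft    stack    1024
    *    hard    stack    10240

    # /etc/sysctl.conf -- kernel-wide VM settings, applied at boot or with "sysctl -p"
    vm.overcommit_memory = 2    # example: disallow allocations beyond the commit limit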
