Having free memory does not make a system run faster. A system runs fastest when all of its real memory is being put to use. Let me explain. Say you have 1GB of RAM and 2GB of swap, and a set of services (httpd/ftpd/sshd/etc) that in total initially allocate 200MB of RAM. When you first boot the system you would probably see "MemFree:" at around 800MB. Of course, as these services open/read/write/close files they allocate more memory to work with the data. After a typical program is finished with a newly allocated area it frees it so some other program (or the system) can use that area for something else. In Linux the file data it was working with does not simply go back into "MemFree:"; it is kept in the cache, so if that program (or another program) wants that same file/data again it doesn't have to go back to the slow disk drive to get it, it can pull it straight out of fast RAM. That's why on a healthy Linux system you will usually see very little in the "MemFree:" section of /proc/meminfo, very little swap in use, and a good chunk in "Cached" (and the related fields).
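If you want to see those numbers on your own box, they come straight out of /proc/meminfo (the exact field names can vary a little between kernel versions), or from "free":

# grep -E 'MemTotal|MemFree|Buffers|Cached|SwapTotal|SwapFree' /proc/meminfo
# free -m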
What kills Linux and UNIX systems is when programs need more RAM than you actually have and the kernel has to page memory out to swap (and back in) for them to do their work. It's that paging (swapping) that will bring your system to a crawl, because you are using slow disk in place of fast real memory. The occasional swap usually isn't a big deal; it's when you get into "thrashing", where paging goes on continually, that it becomes painful. To spot it, use tools like "iostat" and "vmstat" (watch the si and so columns in vmstat). For instance:
# vmstat 5
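The exact column layout differs a bit between vmstat versions, but the output looks something like this (the numbers below are purely illustrative, from a box that isn't swapping):

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0      0 102400  51200 614400    0    0    12    25  105  210  3  1 95  1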
You want to see "0" in the si and so columns; the bigger those numbers get, the more swapping is going on. Of course memory is not the only bottleneck that can make a system slow: disk I/O and disk bandwidth, whether the disks are properly partitioned for speed, CPU speed and the number of CPUs, poorly tuned applications, network interfaces, internal/external network issues, etc. All of this can be figured out with the usual system tools like top, vmstat, iostat, free, sar, netstat, SNMP graphing tools, and so on. I personally like to graph all of these variables so I can spot trends, which makes it easier to pinpoint a problem when it occurs. Here is an example of a custom graph that I create:
http://voidmain.is-a-geek.net/i/memgraph.png
In the above graph I have it set up so that everything below 0 is swap and everything above 0 is real memory. The black lines are the totals (total swap space and total real memory). The white area just below the top black line represents what programs have allocated, and the white area just above the bottom black line represents swap in use. The very light green just above 0 represents "MemFree:" from /proc/meminfo. This particular server was rebooted near the beginning of the graph (left side), and you can see that right after it came up there was the most light green (MemFree). As the system was used, the MemFree is slowly replaced by Cached, while the amount of memory that programs needed stays roughly constant throughout the graph. Toward the right side you can see some swap being used, but the system isn't in a state where it is constantly swapping in and out (you can't really see that on a memory graph like this; you have to use a tool like vmstat).
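If you want to collect the same kind of numbers yourself, everything in that graph can be pulled out of /proc/meminfo. This is just a rough sketch of the idea; counting "allocated" as MemTotal minus MemFree/Buffers/Cached is an approximation (it ignores slab and other kernel usage):

# awk '/^MemTotal:/{mt=$2} /^MemFree:/{mf=$2} /^Buffers:/{b=$2} /^Cached:/{c=$2} /^SwapTotal:/{st=$2} /^SwapFree:/{sf=$2} END {print "allocated_kB:", mt-mf-b-c; print "cached_kB:", b+c; print "free_kB:", mf; print "swap_used_kB:", st-sf}' /proc/meminfo

Feed those numbers into whatever graphing tool you like (rrdtool, MRTG, etc.) on a cron schedule and you end up with a graph like the one above.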
To be honest I can't really explain why any swap at all was allocated when there was plenty of cached memory available that could have been given up instead. The only thing I can think of is that the system detected areas of allocated memory that never see any action and swapped them to disk so that more memory would be available for cache, which does get used. That is just speculation on my part though.
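For what it's worth, that guess lines up with how the kernel is documented to behave: the vm.swappiness tunable controls how willing it is to push idle anonymous memory out to swap in order to keep more room for cache. You can check it with:

# cat /proc/sys/vm/swappiness
# sysctl vm.swappiness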