I've been using SSDs for a couple years now. Hands down the biggest improvement in performance from an upgrade that I can remember in the last 20 years...
If you can deal with keeping your OS and Apps on a different drive than your data you can pick up a small SSD for very cheap (<$100) compared to just a couple years ago (the first one I bought was almost $900).
Will adding an SSD to a seven-year-old computer make it appropriate for modern computing tasks? Not really, for most folks... Even the OS wants hardware graphics acceleration these days. But if you know the limitations and plan to stick with 2D apps that need little RAM/CPU horsepower, it will help. And of course, it's an easily portable upgrade. If you buy a new machine next year you could move it over (though with the price curve it may not be worthwhile, with newer, bigger, faster, cheaper drives coming out every day...)
When it comes to mounting, you can pick up an externally accessible solution for under $20. I use 5-in-3s for all of my data drives (the ones I like are a bit more expensive), and 2-in-1s like this http://www.newegg.com/Product/Produc...-014-_-Product for system SSDs. There are cheaper (~$3) internal mounting solutions, but if you work on your systems with any frequency, external access is great.
I've been using SSDs for a couple years now. Hands down the biggest improvement in performance from an upgrade that I can remember in the last 20 years...
An SSD is, at best, a band-aid. If you want to dramatically improve speed, run your OS from a RAMdisk - much, much faster than an SSD. Alternatively, increase the size of the disk cache - either will give a bigger speed increase than an SSD, for free.
Here is the problem: The time it takes a CPU to execute one instruction is now less than the time it takes to extract that instruction from the memory or disk. To overcome that issue, cache - special very high speed memory located close to the CPU - is added. The system "infers" what data is needed in the cache and loads it while the CPU is doing other things.
For some kinds of operations, the system cannot predict well and has to load cache from disk, which is very slow. So, if one is running poorly written programs, there will be a lot of disk accesses and associated slowness.
A faster second-level cache is one way to compensate for these poorly written programs. One option is the SSD, but a better one is a RAMdisk or in-memory disk cache.
Unfortunately, it doesn't really matter - the real, average speed of PCs manufactured in the last year or so has reached the physical limit of current approaches, and there is no solution in sight.
--------------------------------------------------
Electrical Engineer by day, Woodworker by night
Picking up on something said earlier about keeping your OS and apps on a different drive than your data: the only problem I seem to be having in doing this so far is with programs that automatically default to Windows' conventional folders, like My Pics, My Docs, etc. Especially with the My Docs folder, is there a way to reassign it to the data drive?
hijack over
All of those "special" folders are defined in the Windows Registry. You can either manually edit it - if you know what you're doing - or use a program designed to tweak those special Registry settings safely.
Registry location:
"HKEY_CURRENT_USER\Software\Microsoft\Windows\Curr entVersion\Explorer\Shell Folders"
The "TweakUI Powertoy" tool from Microsoft is a good tool to access useful registry settings. One of the settings is the "Places Bar" - when a program uses the regular Windows Open/Close dialog boxes (where you enter file names & pick folders) you'll notice a few pre-defined buttons on the left for common places. TweakUI lets you customize these in its "Common Dialogs" menu item. The "My Computer" "Special Folders" menu items edit the items you asked about.
Download the TweakUI version specific to your Windows version (XP, Vista, Windows 7, etc.) and, sometimes, to the 32-bit or 64-bit edition.
Also, in most versions of Windows, if you use regular Windows Explorer and go to the folder itself (not IN the folder - go to the parent WITH the folder) and then right-click-drag the folder to some other location, you can move the folder and its contents wherever you want.
An SSD is, at best, a band-aid. If you want to dramatically improve speed, run your OS from a RAMdisk - much, much faster than an SSD. Alternatively, increase the size of the disk cache - either will give a bigger speed increase than an SSD, for free.
That was a great approach when running DOS a decade ago, not such a useful approach with Win7. Loading several GB of data into a RAMDrive would significantly increase boot times and still doesn't address the read speed issues for all of your apps...
Which flavor SSD are you running? If it's an older model with no TRIM or manual clean-up support you may not be seeing much of an improvement once the drive has been filled. Otherwise I cannot believe you don't recognize the HUGE performance/dollar ratio improvement relative to CPU, GPU, MOBO or RAM upgrades.
Originally posted by woodturner
Here is the problem: The time it takes a CPU to execute one instruction is now less than the time it takes to extract that instruction from the memory or disk. To overcome that issue, cache - special very high speed memory located close to the CPU - is added. The system "infers" what data is needed in the cache and loads it while the CPU is doing other things.
For some kinds of operations, the system cannot predict well and has to load cache from disk, which is very slow. So, if one is running poorly written programs, there will be a lot of disk accesses and associated slowness.
CPU operation has been asynchronous from memory operation for years. L1/L2/L3 cache is small (a few MB) and addresses the "slow" speed of system RAM, not disk access. Hard drives are a few additional orders of magnitude slower than RAM, and a few MB of CPU cache wouldn't be able to store the contents of most of the files where slow read speeds are an issue anyhow...
Picking up on something said earlier about keeping your OS and apps on a different drive than your data: the only problem I seem to be having in doing this so far is with programs that automatically default to Windows' conventional folders, like My Pics, My Docs, etc. Especially with the My Docs folder, is there a way to reassign it to the data drive?
hijack over
You can easily move My Docs, at least in XP.
First, back up your data!
Right-click My Computer
Right-click My Documents and select Properties
Click the Move button and select the new location.
Wait for the move to finish.
Loading several GB of data into a RAMDrive would significantly increase boot times and still doesn't address the read speed issues for all of your apps...
It does take longer on startup, but RAM is much faster than SSD, so it will outperform an SSD, assuming the app is written well enough that the cache prediction is working reasonably well.
Otherwise I cannot believe you don't recognize the HUGE performance/dollar ratio improvement relative to CPU, GPU, MOBO or RAM upgrades.
Depends significantly on the app - and RAM used as cache is faster. Agree that CPU, MOBO, and just a system RAM upgrade will have minimal impact. If the app is written to use the GPU, though, the GPU can provide a huge speedup.
CPU operation has been asynchronous from memory operation for years. L1/L2/L3 cache is small (a few MB) and addresses the "slow" speed of system RAM, not disk access. Hard drives are a few additional orders of magnitude slower than RAM, and a few MB of CPU cache wouldn't be able to store the contents of most of the files where slow read speeds are an issue anyhow...
Sort of. CPU execution is not literally asynchronous with respect to memory, though that is one conceptual model that is used. L1 and L2 cache are typically on-chip cache and use primarily locality of reference to speed up execution. L3 cache is the disk cache and is used to mitigate the huge latency of disk. Disk read speeds approach RAM transfer rates - it's the latency that causes the problem. SSD transfers at close to the rates of HD, it just reduces the latency. So, if you have many smaller files, SSD will usually be faster. If you have one huge file, SSD will be about the same or maybe a little slower than HD. In other words, if you can reduce the latency overhead, HD is fine.
Better still is to use a disk cache in RAM - commonly called a ramdisk or paging buffer. Windows estimates the optimal size of this buffer based on its ideas of what it should be. If one has 4 GB of RAM, though, increasing the size of the buffer will provide a 10x to 100x speedup over an SSD - again, assuming the application is well enough written to allow the cache prediction to work well.
Bottom line, it depends on what applications you are running. For badly written applications that use huge blocks of spaghetti code, SSD is likely to make a significant difference. Applications that use huge random data files will likely also benefit from SSD. Many windows applications will benefit more from increasing the disk buffer size.
The primary reality, though, is none of this makes any difference for computationally limited applications. Computers have reached the limit of their speed potential with current architectures, due to physical constraints. More cores is not the answer, because there is currently no way to efficiently use them. That's why benchmarks show, for example, that an i7 running a single application is slower than a two-year-old uniprocessor running the same application.
Current estimates suggest it will be another 10 years or so before we see more than a handful of cores on a chip, and another 20 or more years before we find a general solution to the problem. For the next 20 years or so, the only real speedup we are likely to achieve is for specific classes of problems.
--------------------------------------------------
Electrical Engineer by day, Woodworker by night
Sort of. CPU execution is not literally asynchronous with respect to memory, though that is one conceptual model that is used. L1 and L2 cache are typically on-chip cache and use primarily locality of reference to speed up execution. L3 cache is the disk cache and is used to mitigate the huge latency of disk.
Uh, they are literally asynchronous in that they operate at different speeds. This isn't a huge issue because it often takes several cycles for a single memory read/write operation to complete, so the clock mismatch is a second-order performance impact.
In modern computers (the Core i series, Phenom II, and server-class machines like the Xeons going back half a decade, and the Itaniums before that...) the L3 cache is also on-die and again is small (16 MB or smaller), used primarily to speed access to data in memory.
Drive caching is also a good thing, no doubt.
Originally posted by woodturner
Disk read speeds approach RAM transfer rates - it's the latency that causes the problem. SSD transfers at close to the rates of HD, it just reduces the latency. So, if you have many smaller files, SSD will usually be faster. If you have one huge file, SSD will be about the same or maybe a little slower than HD. In other words, if you can reduce the latency overhead, HD is fine.
Disk read speeds approach RAM transfer rates? True, in the sense that I ride my bicycle at speeds that approach a jet airplane... High-performance RAM has a peak transfer rate of about 18 GB/s right now. Most consumer drives are still using 3 Gbit/s SATA interfaces, a difference of 50:1; use a 6 Gbit/s SATA interface and it's still running at 4% of the throughput of RAM.
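As a quick sanity check on those ratios, in Python with the round numbers above:

```python
# Rough sanity check on the interface-vs-RAM ratios quoted above.
ram_bw = 18e9          # bytes/s - high-performance DDR3, peak
sata2_bw = 3e9 / 8     # 3 Gbit/s SATA II link, in bytes/s
sata3_bw = 6e9 / 8     # 6 Gbit/s SATA III link, in bytes/s

print(f"RAM vs SATA II:  {ram_bw / sata2_bw:.0f}:1")       # ~48:1
print(f"SATA III vs RAM: {sata3_bw / ram_bw:.1%} of RAM")  # ~4.2%
```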
Of course an SSD hits the same interface bottleneck, so that's not where the performance difference comes into play. You're spot on that on larger files the performance difference is relatively smaller. This means a ho-hum SSD will only outperform a high-performance VelociRaptor by about 100% (250 MB/s vs 120 MB/s) on large sequential files (however, these are not the ones that make your PC feel sluggish). Compared to an average laptop drive, double that improvement.
But now let's talk about the real benefit: small, non-sequential files. These are the vast majority of files on individuals' PCs, and the ones that account for the lion's share of delays. Again, our VelociRaptor (no slouch in the consumer/enthusiast moving-parts realm) will perform a random 4K read at something around 1 MB/s. A really bad SSD will do over 20 MB/s, and the good ones will be over 60 MB/s, so you're looking at a 2000-6000% performance increase. To be fair, that's a worst-case scenario, so you don't see real-world improvements of 6000%, but you do see very substantial, 1000%+ improvements.
I will state it again. This is the biggest real-world computer performance increase I've seen in many years. Probably the biggest since going from tape or floppy to a hard drive.
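If anyone wants to check this on their own hardware, here's a rough Python sketch that times random 4K reads against a sequential pass. The test file path is a placeholder - point it at any multi-GB file on the drive you're testing, and keep in mind the OS cache will inflate the numbers if the file fits in RAM:

```python
import os, random, time

TEST_FILE = r"C:\temp\bigfile.bin"   # placeholder: any multi-GB file on the drive under test
BLOCK = 4096
READS = 2000

size = os.path.getsize(TEST_FILE)

# Random 4K reads - the access pattern where SSDs shine.
with open(TEST_FILE, "rb", buffering=0) as f:
    t0 = time.time()
    for _ in range(READS):
        f.seek(random.randrange(0, size - BLOCK))
        f.read(BLOCK)
    rand_secs = time.time() - t0

# Sequential reads of the same total volume, for comparison.
with open(TEST_FILE, "rb", buffering=0) as f:
    t0 = time.time()
    for _ in range(READS):
        f.read(BLOCK)
    seq_secs = time.time() - t0

mb = READS * BLOCK / 1e6
print(f"random 4K:  {mb / rand_secs:.1f} MB/s")
print(f"sequential: {mb / seq_secs:.1f} MB/s")
```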
Originally posted by woodturner
The primary reality, though, is none of this makes any difference for computationally limited applications. Computers have reached the limit of their speed potential with current architectures, due to physical constraints. More cores is not the answer, because there is currently no way to efficiently use them. That's why benchmarks show, for example, that an i7 running a single application is slower than a two year old uniprocessor running the same application.
Current estimates suggest it will be another 10 years or so before we see more than a handful of cores on a chip, and another 20 or more years before we find a general solution to the problem. For the next 20 years or so, the only real speedup we are likely to achieve is for specific classes of problems.
Well, yes and no...
First, most users aren't CPU bound very often. Clearly nowhere near as often as they are by HDD speed. And, CPUs are no longer doubling every few years, true enough.
However, when it comes to CPU-heavy tasks, oftentimes these are exactly the ones most able to benefit from having multiple cores. In other cases, new instruction sets can allow the CPU to do much more in the same amount of time. Try doing AES encryption or video compression on an i7 and you'll see how two parts running at the same clock rate can have wildly different performance. Oh, and the new CPU will draw less power and throw off less heat while doing it...
Uh, they are literally asynchronous in that they operate at different speeds.
"Asynchronous" means that there is no correllation between the clocks. Put another way, there is no fixed relationship between the clocks, so that one "drifts" relative to the other. In PCs, all of the clocks are derived from a single source and we go to great lengths to keep them in phase and compensate for transit delay on the clocks. Thus, all the clocks in a PC are synchronous and, as a result, memory and CPU execution are synchronous.
This isn't a huge issue because it often takes several cycles for a single memory read/write operation to complete, so the clock mismatch is a second-order performance impact.
It's important to differentiate between latency and bandwidth. For example, with DDR3 memory, latency is in the range of 10 ns, while a single-core CPU can execute some instructions in 333 ps. However, the data transfer rate for DDR3 is 17 GB/s, or 235 ps per 32-bit word - about 30% faster than the fastest execution of the CPU. So memory bandwidth is sufficient, that is, the data can be transferred at CPU speed, but the latency, the time to find the data and start a burst, is around 30 times the fastest CPU rate.
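Worked through with those (rough, order-of-magnitude) numbers:

```python
# Order-of-magnitude numbers from the paragraph above.
ddr3_bw = 17e9        # bytes/s, DDR3 peak transfer rate
word = 4              # bytes in a 32-bit word
ddr3_latency = 10e-9  # seconds to find the data and start a burst
cpu_instr = 333e-12   # seconds, fastest instruction time cited

per_word = word / ddr3_bw
print(f"per-word transfer time: {per_word * 1e12:.0f} ps")        # ~235 ps
print(f"latency vs instruction: {ddr3_latency / cpu_instr:.0f}x") # ~30x
```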
In modern computers (the Core i series, Phenom II, and server-class machines like the Xeons going back half a decade, and the Itaniums before that...) the L3 cache is also on-die and again is small (16 MB or smaller), used primarily to speed access to data in memory.
L3 cache is, by definition and as that term is used by computer architects, external to the CPU. On multi-cores, the L3 cache may be on-chip, but is still external to each core. The purpose of L3 cache is to store more data than will fit in L1 and L2. In practice, it ends up serving as a disk cache but also averages out the latency of system memory as you noted.
SATA III transfers at 6 Gb/s, or 750 MB/s. Parallel interfaces achieve 48 GB/s or more, compared to 17 GB/s for DDR3 RAM.
Of course an SSD hits the same interface bottleneck, so that's not where the performance difference comes into play.
Agreed - SSD achieves transfer rates of 3 Gb/s. However, as I said earlier, the latency is lower - it can be as long as 2 ms for a rotational drive vs. 0.5 ms for some SSDs. Thus, an SSD can start sending data 5 to 10 times "sooner" than a rotational drive.
But now let's talk about the real benefit: small, non-sequential files. These are the vast majority of files on individuals' PCs, and the ones that account for the lion's share of delays.
I'm not convinced that those files do account for the majority, based on the research. If they are "small", they should be cached and run from RAM, so the type of drive should make no difference.
Again, our VelociRaptor (no slouch in the consumer/enthusiast moving-parts realm) will perform a random 4K read at something around 1 MB/s. A really bad SSD will do over 20 MB/s, and the good ones will be over 60 MB/s, so you're looking at a 2000-6000% performance increase.
I assume you are including latency in those numbers? As mentioned earlier, to be accurate we need to separate latency and bandwidth.
This is the biggest real-world computer performance increase I've seen in many years.
No disagreement that for poorly written applications there will be some benefit to SSDs over HDs, but the point is that caching in system RAM will outperform any disk option - SSD or HD.
First, most users aren't CPU bound very often.
Research suggests that around 68.7% of applications are CPU bound. In fairness, the popularity of gaming (which is almost always CPU bound) skews those results. I agree that the typical non-game playing home computer user is often not CPU bound, unless they are editing photos and that kind of thing.
Clearly nowhere near as often as they are by HDD speed.
Again, statistically, for Windows PC users, it's the overhead of Windows that is the primary bottleneck. That's one reason why Linux users rarely complain about disk access time (but Linux also does a better job of managing the L3 cache).
However, when it comes to CPU-heavy tasks, oftentimes these are exactly the ones most able to benefit from having multiple cores.
There are two primary problems with multicore processors:
1. The clock rates have to be reduced due to power dissipation issues - so each processor runs slower than a unicore processor.
2. The programming techniques to effectively use multicore processors have not yet been developed. The only way to use them currently is to "hand code" for them. So, if there is enough funding to support it, a particular application can be rewritten with very extensive effort to exploit the parallelism in the application - but only if it has parallelism. Many applications do not have exploitable parallelism. It's a huge obstacle, sometimes referred to as the "programming wall". Here is a link to a recent Patterson article that further explains these issues: http://spectrum.ieee.org/computing/s...with-multicore
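To make the "hand code" point concrete, here's a toy Python sketch along those lines. The workload is made up, and it only splits across cores cleanly because the tasks are completely independent - which is exactly the property many real applications lack:

```python
from multiprocessing import Pool
import time

def work(n):
    # Made-up CPU-bound task: sum of squares up to n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [3_000_000] * 8

    t0 = time.time()
    serial = [work(n) for n in jobs]
    serial_secs = time.time() - t0

    t0 = time.time()
    with Pool() as pool:          # one worker process per core by default
        parallel = pool.map(work, jobs)
    par_secs = time.time() - t0

    print(f"serial:   {serial_secs:.2f}s")
    print(f"parallel: {par_secs:.2f}s  (speedup {serial_secs / par_secs:.1f}x)")
```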
Try doing AES encryption or video compression on an i7 and you'll see how two parts running at the same clock rate can have wildly different performance.
In our recent research, we tested various encryption methods (including AES), multiple video compression methods, multiple types of DSP transformations, and several biometric methods. As expected, the i7, with its lower clock rate per core, had lower bandwidth (i.e., ran slower) than a uniprocessor, even when the uniprocessor clock rate was lowered to match the i7. When the code was manually rewritten to exploit parallelism and use multiple cores, we were able to achieve a speedup as high as 9.45% for some applications. For such a small speedup, it's rarely worth the effort.
FWIW, our current research suggests a speedup of 100 times can be achieved for certain classes of problems. It will probably be two years or so before that work is published in the journals.
You might find that trying to make new hardware work with obsolete software is a real problem. I spent 3 frustrating days trying to load XP on a new system and failed. I ended up installing Windows 7. I like Windows 7 quite a bit, although it has some annoying quirks.
I've read mixed reviews about the SSDs on newegg.com. It seems like a good idea, but lots of people aren't seeing the huge speed increases.