TideLog Archive for the “PC Repair” Category
I’ve had this problem a few times on my laptop. It occurs mostly when the power suddenly goes off and it switches to battery. You lose all capacity monitoring, and can’t tell how much is left. The system tray icon changes to this:
Microsoft’s forums are hilarious. Their “Most Valuable Professionals” give the funniest canned cut ‘n’ paste responses, from “Your power driver is corrupt” to “Your Windows needs reinstalling!”. I know exactly what causes it, and it ain’t anything to do with “power drivers” or corrupt Windows. It’s the little monitoring chip in the battery. Like a lot of integrated electronics, it sometimes gets confused. Sudden switchovers from mains to battery tend to cause it, especially if there are any surges from the battery as it kicks in.
The age-old advice of “Reboot!” is the wise advice here. If that doesn’t cure it, turn your machine off, remove the mains and battery, and hold your power button down to discharge the circuitry in your device (apart from the RTC circuit, but that doesn’t matter); that should cure it. Removing the battery opens the circuit to the sensing system in the battery, and resets it.
Simples. I hate MVPs. They go on a 5-day course and think that gives them a Professional title? I’ve done MVP courses myself, but I also have the years of software and electrical experience to back them up.
No Comments »
When a hard disk is manufactured, there are areas on the platter that have bad sectors. Considering that a 2 TB hard disk has around 4 billion 512-byte sectors, a few bad sectors are only a tiny proportion of the total number on the drive. During the test phases of a hard disk, the platters are scanned at the factory and the bad sectors are mapped out – these are generally called ‘Primary Defects’. The primary defects are stored in tables in the firmware zone, or in some cases the ROM of a hard disk. When you buy a brand new hard disk, you will most likely be completely unaware of these bad sectors and their numbers, because they are ‘mapped out’ using ‘translator’ algorithms.
Modern hard disks use Logical Block Addressing, or LBA. This describes the sector numbering system on the hard disk, which runs in sequence:
0,1,2,3,4,5,…,n-1,n (where n is the last sector on the drive).
Spare sector pools
All modern hard disk drives have a spare sector pool. This is used when bad sectors develop during the normal life of the hard disk and any newly found bad sectors are ‘replaced’ with good ones from the spare sector pool. This process is invisible to the user and they will probably never know that anything has changed.
How Bad Sector Mapping Works:
There are at least two methods of bad sector re-mapping (or translation); these are the P-List and the G-List.
- P-List entries are defects found during manufacture, also known as Primary Defects
- G-List entries are defects that develop in normal use of the drive, known as Grown Defects
There are other defect lists found in modern drives but the principles are similar. For example, you may find a T-List or a Track defect list, or an S-List or System area defect list.
Let’s get into how these defect lists actually work. Say we have a small hard disk with 100 sectors and a 10-sector spares pool.
When bad sectors are found at the factory, shift-points are entered into the P-List. Take the LBA sequence 0,1,2,3,4,5,6,7,8,9,10 … 99 (100 sectors), and say that sectors 3, 6 and 9 are found to be bad. When the first bad sector is found, the first part of the re-mapping process looks like this: 0,1,2,B,4,5,6,7,8,9,10 …
What happens here is that the bad sector at position 3 is recorded in the P-List. The new map now looks like this:
0,1,2,P,3,4,5,6,7,8,9,10 … You can see that logical 3 is now where physical 4 was.
The next bad sector, at LBA 6, is now found:
0,1,2,P,3,4,5,B,7 … and is again mapped out, giving 0,1,2,P,3,4,5,P,6,7 …
When the whole sequence is complete, our final map looks like this:
0,1,2,P,3,4,5,P,6,7,P,8,9,10 …
Because these sectors are mapped out, the user will never be aware that they exist. If you want to look at sector 6, the drive will translate that to physical sector 8: it takes the 6 and adds the shift points to it, +1 for the bad sector at LBA 3 and +1 for the bad sector at LBA 6. When the testing gets to the end of the drive, in order that it is the correct size of 100 sectors, it allocates sectors from the spare sector pool, completely concealing the fact that there are bad sectors on the media. To all intents and purposes the drive looks just like the original 0,1,2,3,4,5,6,7,8,9,10 … sequence. However, our spare pool has shrunk, and there are now 7 sectors remaining in it.
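The shift-point arithmetic above can be sketched in a few lines of Python. This is a toy model of my own, not real drive firmware, using the example’s three factory defects at physical positions 3, 7 and 11:

```python
# Toy model of P-List translation. Each factory defect adds a +1
# "shift point" to every logical address that falls at or beyond it.

P_LIST = [3, 7, 11]  # physical positions of the factory-found defects

def logical_to_physical(lba):
    """Translate a logical block address to the physical sector,
    skipping over the mapped-out primary defects."""
    phys = lba
    for bad in sorted(P_LIST):
        if bad <= phys:
            phys += 1  # shift past this bad physical sector
    return phys

# Logical 3 lands on physical 4 (one shift point), and logical 6
# lands on physical 8 (+1 for the defect at physical 3, +1 for the
# one at physical 7), exactly as described above.
print(logical_to_physical(3))  # prints 4
print(logical_to_physical(6))  # prints 8
```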
After using the drive for a while, some bad sectors develop; the drive takes care of these using a grown defect list.
The grown defect list, or G-List, is a table containing the locations of bad sector defects found during normal operation of the hard disk drive. When a bad sector occurs during normal use of the drive, a process similar to P-List generation occurs, resulting in the bad sector being mapped out, though the mechanics are slightly different. Let’s say our hard disk develops a bad sector at the current LBA 6. First the bad sector is mapped out, giving 0,1,2,3,4,5,G,7,8,9,10 … Then a sector from the spare pool is allocated in the bad sector’s place. We used 3 of those sectors in factory testing, so the next available spare sector is 104; this now becomes mapped to LBA 6, so our sequence looks like this: 0,1,2,3,4,5,104,7,8,9,10 …
Again, this process is completely invisible to the user and will still look like the original sequence of 0,1,2,3,4,5,6,7,8,9,10
You might ask, ‘why don’t the new defects get added to the P-List?’ The answer is that adding a grown defect to the P-List would shift the data up the drive by one sector from the point where the new bad sector is found, so every sector after it would no longer line up with its data. If you look again at the methodology behind the P-List it will help you understand this.
While a G-List entry can help to revive a hard disk, any data stored in the original sector is usually lost. This may appear to the user as a file that no longer opens, a program that doesn’t run anymore, or some other errant behaviour, and it will not become apparent until the next time the file is opened. It may also be such a long time since it was last opened that your backup rotation means there are no backups of the working version left. So bear this in mind when developing your backup plan.
Defect Mapping in a live system
When a hard disk is powered up, the P-List and G-List are usually loaded into RAM on the controller card. As requests for data come through, the requested location is passed to the translator, which makes the calculations necessary to determine which physical sectors to actually read. In our example above, if we wanted the data from LBA 6, the translator would first run through the P-List and add 2 sectors to the count for the two bad sectors found at the factory, then check this value against the G-List and find it has been re-allocated to sector 104. It then reads sector 104 and presents you with the data.
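The read path just described, P-List shift points first and then a G-List check, might be modelled like this (again a toy sketch with made-up structures, reusing the example’s numbers):

```python
P_LIST = [3, 7, 11]   # factory defects at these physical positions
G_LIST = {8: 104}     # grown defect: shifted address 8 (logical 6)
                      # has been reallocated to spare sector 104

def translate(lba):
    """Mimic the controller's translator: apply the P-List shift
    points, then look the result up in the G-List in case it has
    been reallocated to the spare pool."""
    phys = lba
    for bad in sorted(P_LIST):
        if bad <= phys:
            phys += 1
    return G_LIST.get(phys, phys)  # reallocated? read the spare instead

print(translate(6))  # prints 104 - the grown defect's replacement
print(translate(5))  # prints 6 - only the P-List shift applies
```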
All the magic that goes unnoticed by normal people 🙂
No Comments »
Many electronics manufacturers, including HDD manufacturers like Seagate, have been using the industry standard “Mean Time Between Failures” (MTBF) to quantify disk drive average failure rates. MTBF has proven useful in the past, but it is flawed.
To address issues of reliability, Seagate is changing to another standard: “Annualized Failure Rate” (AFR).
MTBF is a statistical term relating to reliability as expressed in power on hours (p.o.h.) and is often a specification associated with hard drive mechanisms.
It was originally developed for the military and can be calculated several different ways, each yielding substantially different results. It is common to see MTBF ratings between 300,000 and 1,200,000 hours for hard disk drive mechanisms, which might lead one to conclude that the specification promises roughly 34 to 137 years of continuous operation. This is not the case! The specification is based on a large (statistically significant) number of drives running continuously at a test site, with data extrapolated according to various known statistical models to yield the results.
Based on the observed error rate over a few weeks or months, the MTBF is estimated and not representative of how long your individual drive, or any individual product, is likely to last. Nor is the MTBF a warranty – it is representative of the relative reliability of a family of products. A higher MTBF merely suggests a generally more reliable and robust family of mechanisms (depending upon the consistency of the statistical models used). Historically, the field MTBF, which includes all returns regardless of cause, is typically 50-60% of projected MTBF.
Seagate’s new standard is AFR. AFR is similar to MTBF and differs only in units. While MTBF is the probable average number of service hours between failures, AFR is the probable percent of failures per year, based on the manufacturer’s total number of installed units of similar type. AFR is an estimate of the percentage of products that will fail in the field due to a supplier cause in one year. Seagate has transitioned from average measures to percentage measures.
MTBF quantifies the probability of failure for a product; however, when a product is first introduced, this rate is often a predicted number. Only after a substantial amount of testing or extensive use in the field can a manufacturer provide demonstrated or actual MTBF measurements. AFR will better allow service plans and spare unit strategies to be set.
Hard drive reliability is closely related to temperature. Drives are designed for an ambient operating temperature of 86°F (30°C). Temperatures above 122°F (50°C) or below 41°F (5°C) decrease reliability. Directed airflow of up to 150 linear feet/min is recommended for high-speed drives.
The failure rate does not include drive returns with “no trouble found”, excessive shock failure, or handling damage.
Here is an example excerpt from a Product Manual, in this case for the Barracuda ES.2 Near-Line Serial ATA drive, which we installed in a backup server at Kana’s datacentre:
The product shall achieve an Annualized Failure Rate – AFR – of 0.73% (Mean Time Between Failures – MTBF – of 1.2 Million hrs) when operated in an environment that ensures the HDA case temperatures do not exceed 40°C. Operation at case temperatures outside the specifications in Section 2.9 may increase the product Annualized Failure Rate (decrease MTBF). AFR and MTBF are population statistics that are not relevant to individual units.
AFR and MTBF specifications are based on the following assumptions for business critical storage system environments:
- 8,760 power-on-hours per year.
- 250 average motor start/stop cycles per year.
- Operations at nominal voltages.
- Systems will provide adequate cooling to ensure the case temperatures do not exceed 40°C. Temperatures outside the specifications in Section 2.9 will increase the product AFR and decrease MTBF.
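The two headline figures in that excerpt are consistent with each other. Under the 8,760 power-on-hours-per-year assumption, and treating failures as a constant-rate (exponential) process, the sketch below recovers the quoted 0.73% from the 1.2 million hour MTBF:

```python
import math

POH_PER_YEAR = 8760  # power-on hours per year, per the spec above

def afr_from_mtbf(mtbf_hours):
    """Annualized failure rate implied by an MTBF figure, assuming
    a constant failure rate over the year."""
    return 1 - math.exp(-POH_PER_YEAR / mtbf_hours)

print(f"{afr_from_mtbf(1_200_000):.2%}")  # prints 0.73%
```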
1.2 million hours MTBF? I’d have expected that kind of lifetime from an older hard drive, from when they were made to LAST, from the days of manufacturers like Conner and ExcelStor, but you certainly won’t get THAT kind of running hours from a modern drive, certainly not 1.2 million hours CONSTANT running!
No Comments »
Posted by Tidosho in Data Recovery, Electronics, Home Computing, PC Repair, Tech, Work, tags: bad sectors, hard bad sectors, Hard drive, preventing, soft bad sectors, what are bad sectors
Bad sectors are little clusters of data on your hard disk that cannot be read. More than that, though, they have the potential to cause real damage to your hard drive (catastrophic failure) if they build up over time, stressing your hard drive’s arms, which carry the read/write heads; there are two heads for each platter, one for each side. Bad sectors are fairly common with normal computer use and the imperfections of the world we live in. Like chip fabrication and LCD panel manufacturing, HDD manufacture is a very critical, precise process, and just as a TFT can leave the factory with bad pixels, a HDD can leave it with bad sectors due to imperfections when it’s made. The manufacturers make legal allowances for a certain level of these imperfections before warranty claims can be made, like the legal limit of 5 dead pixels on a TFT. Having bad sectors will also slow down computer performance, as your drive takes time attempting to read them. However, there are several simple steps you can take to prevent HDD bad sectors and to repair any that you do have, and here is a step-by-step guide. The most common questions I get as a computer engineer are “What is a sector?” and “How are HDD bad sectors created?”
A sector is simply a unit of information stored on your hard disk. Rather than being a mass of fluid information, your hard disk stores things neatly in “sectors”, a bit like us humans putting things into boxes, where every box is the same size and only holds so much. The standard sector size is 512 bytes.
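Because the boxes are all the same size, working out how many sectors a file occupies is just ceiling division; even a single byte claims a whole 512-byte sector. A quick sketch:

```python
SECTOR_SIZE = 512  # bytes, the classic sector size described above

def sectors_needed(file_bytes):
    """Number of whole sectors a file occupies on disk."""
    return -(-file_bytes // SECTOR_SIZE)  # ceiling division

print(sectors_needed(1))     # prints 1 - one byte still takes a sector
print(sectors_needed(513))   # prints 2 - just over one sector
print(sectors_needed(4096))  # prints 8 - exactly eight sectors
```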
There are various problems that can cause HDD bad sectors:
- Improper shutdown of Windows, especially power loss while the HDD is writing data;
- Defects of the hard disk, including general surface wear, pollution of the air inside the unit due to a dirty or clogged air filter, or the head touching the surface of the disk;
- Other poor quality or aging hardware, including dodgy data cables, an overheated hard drive, and even a power supply problem, if your drive’s power is erratic;
Hard and soft bad sectors
There are two types of bad sectors – hard and soft.
Hard bad sectors are ones that are physically damaged (which can happen because of a head crash, if your drive is dropped while running and writing data) or stuck in a fixed magnetic state. If your computer is bumped while the hard disk is writing data, is exposed to extreme heat, or simply has a faulty mechanical part that allows the head to contact the disk surface, a “hard bad sector” might be created. Hard bad sectors cannot be repaired, but they can be prevented. The heads of a hard drive float on the air cushion generated by the spinning platters, flying less than the width of a human hair away from them; at that scale even a small speck of dust is like a mountain, so knocks are definitely to be avoided.
Soft bad sectors occur when the error correction code (ECC) stored with a sector does not match the sector’s content. Whenever a file is written to a sector, the drive calculates a “checksum” which is used to verify the data; if it doesn’t match upon read, the drive knows the sector is weak. A soft bad sector is sometimes explained as the “hard drive formatting wearing out”, in other words the magnetic field is weakening, like on an old video cassette; they are logical errors, not physical damage. These are repairable by overwriting everything on the disk with zeros. Like tapes and CDs, the magnetic surface on a hard disk is not everlasting and is affected by other magnetic fields around it, so data recovery guys like me recommend regularly imaging a drive directly to another, to keep the data fresh and readable.
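The verify-on-read idea behind soft bad sectors can be illustrated with a simple CRC32. Real drives use far stronger ECC that can also correct small errors, but the principle is the same; this is purely a toy sketch:

```python
import zlib

def write_sector(data):
    """On write, the drive stores a checksum alongside the data."""
    return data, zlib.crc32(data)

def read_sector(data, stored_crc):
    """On read, the checksum is recomputed; a mismatch means the
    sector has gone 'soft' and its contents can't be trusted."""
    return zlib.crc32(data) == stored_crc

payload, crc = write_sector(b"hello, platter")
print(read_sector(payload, crc))     # prints True - sector reads back fine
corrupted = b"hellO, platter"        # one character flipped
print(read_sector(corrupted, crc))   # prints False - a soft bad sector
```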
Preventing bad sectors
You can help prevent bad sectors (always better than trying to repair them, as they say prevention is better than cure!) by paying attention to both the hardware and the software on your computer.
Preventing bad sectors caused by hardware:
- Make sure your computer is kept cool and dust free;
- Make sure you buy good quality hardware from respected brands. Cheap RAM and power supplies are my biggest culprits from experience;
- Always move your computer carefully, and make sure it is TURNED OFF, not in Sleep mode; it can wake up while being moved, especially a laptop;
- Keep your data cables as short as possible;
- Always shut down your computer correctly, and use an uninterruptible power supply if your house is prone to blackouts.
Preventing bad sectors using software
- Use a quality disk defragmenter program with automated scheduling to help prevent head crashes (head crashes can create hard bad sectors). Disk defragmentation reduces hard drive wear and tear, thus prolonging its lifetime and preventing bad sectors;
- Run a quality anti-virus and anti-malware software and keep the programs updated.
Monitoring bad sectors
If you use a tool like HD Sentinel or CrystalDiskInfo and you notice bad sectors on your drive, keep an eye on it. A few bad sectors are not normally a problem; as I mentioned at the start of the article, just as up to 5 bad pixels are allowed on a new TFT before it becomes a warranty claim, hard drives are allowed a few bad sectors due to the imperfections of their manufacturing process. They are manufactured with what are known as “reserved sectors”, a spare area of the disk only accessible by the controller board. If a sector is weak, the controller will attempt to move the data to the reserved area. If this is successful, it then attempts a quick read/write test on the old sector (this takes less than a few milliseconds); if that fails, it marks the sector as bad in the sector map (also stored in the drive’s reserved area, along with the drive firmware) so that it doesn’t attempt to use it again.
If the number of bad sectors starts increasing, or you start to experience other symptoms, such as the drive dropping out completely as if you unplugged it, or any clicking, and data taking longer to read or copy, this could indicate a fault with the read/write heads, or the control circuitry. Stop using it immediately and back up any important data to another drive. If the failing drive is under warranty, print a log off from HD Sentinel and take it along with you to return the drive, as evidence.
S.M.A.R.T Values to look for
When looking at S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) data, the two main attributes to look out for are:
Reallocated Sector Count
This shows how many of the drive’s Reserved sectors have been used. If too many of these are used it generally indicates a problem with the disk surface.
Current Pending Sector
This shows how many bad sectors are currently pending a rewrite. A hard drive will always try to rewrite the sector; if that fails, the sector is reallocated into the reserved area, the drive adds it to the Reallocated Sector Count, and the original sector is marked as unusable. If the rewrite is successful, the Pending Sector count will drop.
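If you pull the attribute table with smartctl (from the smartmontools package), the two attributes above are easy to pick out programmatically. Here is a sketch against a made-up sample of smartctl -A style output; the column layout matches the tool’s, but the values are invented:

```python
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  12
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   3
"""

WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector"}

def raw_values(report):
    """Pull the raw counts of the attributes worth watching."""
    out = {}
    for line in report.splitlines()[1:]:        # skip the header row
        fields = line.split()
        if len(fields) >= 2 and fields[1] in WATCH:
            out[fields[1]] = int(fields[-1])    # RAW_VALUE is the last column
    return out

counts = raw_values(SAMPLE)
print(counts)  # {'Reallocated_Sector_Ct': 12, 'Current_Pending_Sector': 3}
if counts.get("Current_Pending_Sector", 0) > 0:
    print("Sectors pending reallocation - keep an eye on this drive")
```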
No Comments »
Western Digital make really good hard drives, but where their Elements, Passport and MyBook drives are concerned, they’ve taken a wrong turn. The 2.5″ versions all have proprietary PCBs on the drives themselves, so there are no standard micro SATA data and power connectors like you’d expect. The USB connector and LED, plus the interface controller, are all on the single board! This means you can’t just take the drive out and connect it to another USB to SATA enclosure.
A lot of very modern WD Elements, MyBook and Passport enclosures are now also encrypted, meaning the data can only be accessed when the control board is functioning correctly. In this article I’ll show you how to recover data from a WD Passport (laptop sized drive) enclosure, if the USB connector gets damaged.
1. Disassemble the enclosure, remove the drive, then remove the PCB from the bottom of the drive using a Torx screwdriver.
2. Flip the drive board over, you’ll see the following capacitors. Remove them using a soldering iron or a heatgun, being careful not to overheat or damage anything:
3. Next you need to take a standard SATA connector from another drive, or from a parts supplier (eBay has them in droves, search for COMAX SATA connector). Once you have it, take a look at it, you’ll see long pins and short pins. All the long ones are GROUND pins:
4. From the back side of the PCB (the componentless side which faces away from the drive when fitted), you will see pins E71, E72, E73 and E74, these belong to the SATA data pins. The other four pins marked with a red square belong to ground pins:
5. Now solder everything together, using this pinout:
E71 – Tx+
E72 – Tx-
E73 – Rx-
E74 – Rx+
The SATA standard uses two lines, a positive and a negative, for Data TX (Transmit), and two for Data RX (Receive), each pair having a separate ground on the ground lines. Use my picture below as a wiring reference:
Now all you need to do is power the drive over a standard USB cable (if your connector is broken you can try soldering the power lines of a USB cable to the port’s power pins), connect via SATA to your PC, and it should work. NOTE: this WILL NOT work if your drive uses encryption, as the encryption runs through the USB interface chip, and we’re bypassing it.
You may get some “USB device not recognized” errors. Try connecting the SATA drive to a SATA hotplug port once Windows has started, connecting the data cable first, then the power. Hotplug ports are usually purple or orange, depending on the board manufacturer; Gigabyte’s are purple.
25 Comments »
Posted by Tidosho in Electronics, Hobbies, Lifestyle, PC Repair, Programming, Software, Software Bugs, Tech, Work, tags: clicking click beep, Firmware update, fix issue, Seagate ST9500420AS
As shipped retail, this drive is terrible. It constantly tries to park its heads, lagging the system, even during copy and move operations. For a top-end 7200RPM drive this is unacceptable, especially if, like in my Clevo, it is installed in a gaming machine. It is the power saving “features” of the firmware that cause it. The drive also exhibits “beeping” symptoms, where the voice coils of the arm receive a high current to wake the drive up and seek to track; the high current effectively turns the coil into a speaker, and it makes a beeping sound. The drive constantly seems to miss beats because of the parking issue, causing the arm to miss and have to be shocked back into place by the controller.
Using tools like HDDScan to disable the APM (Advanced Power Management) and AAM (Automatic Acoustic Management) features isn’t a permanent fix; once the drive is power cycled the issue starts all over again. The drive refuses any permanent-disable ATA control commands.
There is a Dell firmware that gets rid of the issue and takes the firmware up to 05SDM1. My Clevo laptop’s drive started out with 2SDM1 firmware. The new FW makes the drive visibly quicker. The auto flasher doesn’t work, so we need to force it manually; I’ll show you how.
1. Download the Seagate Update Utility ISO image, hosted on TideLog, this very blog, by clicking HERE. Extract the ZIP file, you’ll find an ISO file called Seagate Utility.iso.
2. Burn the extracted ISO to a CD-RW or DVD-RW, and restart your computer. When your computer restarts, enter your BIOS and make sure the computer is set to boot from CD.
3. The updater will start on its own, but it will actually fail even though a green screen is shown, you will need to manually force it. It will dump you back at a command prompt, so type:
FDLH -m HOLLIDAY -f 0005SDM1.LOD -i ST9500420AS -b -v
Essentially this line forces the detection of Seagate ST9500420AS drives, and force flashes it, even if the BIOS doesn’t have the Dell asset tag embedded.
4. This works on any machine, including Dell Studio, Asus, my Clevo M571TU, and the M570. Any machine with a Seagate ST9500420AS drive should work fine. Any drives with “GAS” on the end are the same drive but with G-Shock protection.
No Comments »
HDD Regenerator is the backbone of our data recovery services at Kitamura, coupled with Runtime Software’s GetDataBack series. They are paid-for software, but they pay for themselves if you’re a data recovery specialist.
One big warning I need to make people aware of: HDD Regenerator 1.71 and lower ARE NOT COMPATIBLE with Advanced Format drives that have 4k sectors. Versions 1.71 and below DO NOT support 4k sectors! You need to use HDD Regenerator 2011 or NEWER.
If you run these older versions, either in DOS or Windows, on an Advanced Format drive with 4k sectors, once it sees a bad sector it will regenerate it as a 512-byte sector, and every single sector after it will be seen as bad, making the drive look totally damaged. IMMEDIATELY STOP THE PROCESS, purchase HDD Regenerator 2011, and re-run it from CD or USB. The newer version will correct the sectors to the proper size.
Don’t torrent it either. Dimitriy Primochenko and his team have done such a great job with this program since it started over 10 years ago, and any IT recovery professional worth his salt, like me, will reward a great team with purchases of their software. The price pays for itself, I use it in Kitamura and personally at home, so reward greatness with kindness and tip him an extra £10 on top of the purchase price.
No Comments »
I’ve never seen this before, and neither has Google! One of my company courtesy laptops has just had a new screen fitted due to a blown backlight tube, but now Windows won’t start. The last time we used it was on an external screen, to back up the customer’s docs to the root of C:\ and do updates. Now all I get is:
RQGEY is compressed
Press CTRL, ALT & DEL to restart
I’ve a feeling something is corrupted somewhere, as that “RQGEY” message is NOT a valid boot failure message; even Microsoft’s site returns no results. I was wiping it anyway, but was surprised at the strange variable. Even if the volume is compressed after installation, critical components such as BOOTMGR, the page file, and the hibernation file are excluded from compression. I don’t think I’ll ever find out what RQGEY is supposed to be…
No Comments »
Posted by Tidosho in PC Repair, Work, tags: amateur, board swap, DCM, DCX, facts, fzabkar, HDDGuru, MDL, professional, Western Digital, WWN
As I mentioned in the last post, a lot of so-called data recovery engineers, like fzabkar on the HDDGuru forums, will claim that when doing a board swap on a Western Digital drive, the DCM and the serial number must match as well as the MDL model/firmware number. This is NOT true. Here I’ll show you exactly what has to match on a Western Digital.
As an example I’ll note down the details of my company laptop’s new 500GB WD drive. On the label are the following sets of numbers:
MDL: WD5000BPVT – 00HXZT1 – This is the combined model and firmware number. It is the MOST IMPORTANT; this is really the ONLY set that MUST match.
S/N: WXM1xxxxED69 – This is the serial number, which is UNIQUE TO EVERY drive, so you will NEVER EVER find a board with a matching one. Every product in the world has its own unique number, just like humans have unique fingerprint patterns, so this is irrelevant if you’re seeking a donor board. I’ve replaced some characters in my serial here with x’es to anonymise it.
WWN:50014EE206116170 – This is a World Wide Name number which is the unique manufacturer identity.
DATE: 18 MAR 2012 R – This is the manufacture date. An R next to it means Recertified, it will also have Recertified written on the right of the big bold capacity marking on the top left of the sticker, as my drive is recertified. This DOES NOT have to match on a donor, the recertified status is irrelevant, it simply means that Western Digital have re-checked it as a customer return and recertified it as new, reprinting the label to show this.
DCM: EBOTJBB – This is the Drive Configuration Matrix string, which identifies the configuration of the drive, such as the type of motor, number of platters and heads, and even the casing. This does NOT have to match, as a quick Google will often reveal different capacity drives with the same DCM. Mine, for example, is shared with a WD5000BMVW 500GB and a WD3200BMVV 320GB, so morons such as fzabkar on forums like HDDGuru are talking through their arses by saying DCMs are unique and that the ROM chip must be swapped. Most of the firmware and S.M.A.R.T. data is stored on a reserved section of the platter, making their claims even more irrelevant.
DCX: TH16X3FZE – This is the drive’s Batch number, so that Western Digital know which factory it came from, what date it was made, and probably the engineer who soak tested it and which line it rolled off. This DOES NOT need to match for a donor board, it is IRRELEVANT.
To sum up, the only bit you really need to worry about is the model/firmware string. I have Western Digital drives as small as 10GB and as large as 2TB in my portfolio that I’ve used my own guidelines on over the years, and I’ve never had any head/track/sector/cylinder issues; the drives have often worked better than they did with their original boards.
I’ve swapped 1,000 Western Digital drive boards over the past 6 years like this using my own guidelines, and each drive is perfect, even my own drives. I consider myself a data recovery professional, as I have performed many platter and arm swaps in a friend’s clean room as well as board swap and data recovery, especially on Western Digital drives, as I love them and have never used any other drives in my own computers.
fzabkar isn’t even a professional; he uses phrases like “will most probably not work” and “as far as I’m aware”, which sounds like pure guesswork. I’m not guessing; I’ve been doing this for many years and have yet to have a WD fail a board swap.
24 Comments »
Today a customer brought his Core 2 Duo desktop in, saying “it wouldn’t load Windows”; that was all he could tell me. Powering his box up, the hard drive was being recognised, but it wasn’t spinning. I didn’t even hear the “start-buzz” (the buzzing noise some drives make as the drive motor starts; the noise is the voice coils in the motor receiving the high start current). On a lot of desktop WDs it’s like a “wuudearrkk” sound, but this one was totally dead.
The arms on a drive will not unlock from the park clamp until the platters are spinning and the airflow inside the drive creates the air cushion the heads fly on; only then, with the arm actuator coils activating, are the heads gently released together.
Upon removing the drive from the computer and unscrewing the control board, it was apparent that the SMOOTH spindle motor starter & driver chip had failed catastrophically, along with the Q8 3.3v voltage regulator for the spindle driver, as my picture below illustrates. The thing that struck me about his system was that he was booting Windows from an old IDE drive (the WD2500), yet his data drive was a SATA Samsung! They should have been the other way round!
It is apparent that the chip has overheated and started to burn up, possibly because the drive either has failed bearings that caused high resistance, or it has simply shorted and died.
To fix this, all you need to do is find a board with a matching part number and firmware, such as WD2500KS-00MJB0. A lot of forum morons say that the DCM (Drive Configuration Matrix) also has to match, or else you’ll need to desolder the ROM chip. THIS IS NOT TRUE. I have recovered at least 1,000 Western Digital drives in the last 6 years by board transplant, from old 20GB WD200s up to new WD20EARS 2TB drives, and all I have ever done is match part number and firmware. The DCM is simply an architecture code, NOT a firmware date code; DCMs are often identical on different capacity drives! The firmware revision is combined with the part number, like WD1200UE-00KVT0: the secondary code after the dash is the firmware.
Only these two have to match on a donor. If any of these so-called professionals on forums tell you otherwise, they obviously don’t know their job properly and are trying to overcharge you for unnecessary work.
2 Comments »
I’m in the middle of very slowly recovering data from Midori’s 250GB Fujitsu hard drive. Over the last 6 months the problems started out innocently, as if there was a bad sector. The drive would freeze with the HDD indicator solid, then after 2 mins it would recover. Running a full HDD Regenerator scan revealed no bad sectors, but still it happened, albeit not very often.
Then suddenly it got worse, doing it every 2 minutes and blue-screening the laptop. Putting the drive in my USB caddy, it freezes and then completely disconnects itself from the USB bus if I attempt to write to it or cut and paste from it, as if I'd used Safely Remove Hardware and turned it off. It will still read data, but very weakly and slowly, starting at 800KB/sec and creeping up to a max of 8.56MB/sec after 5 minutes, when it should be over 15MB/sec, normally starting out at 24MB/sec and easing to between 11 and 15MB/sec.
It seems the MCU (Micro Control Unit, Main Control Unit, or Micro Code Unit) is failing, or there's a voltage regulation issue between it and the heads. Modern hard drives only have three main chips, plus resistors, diodes and capacitors, on their PCBs. You have the MCU (main processor), the cache memory chip, and the motor driver chip, often a SMOOTH chip, that spins the drive up using high current, then tapers off to keep it steady once it's spun up to full revs. Remember when HDDs had massive boards with lots of chips and electrical gubbins? That single MCU does the work of most of those, an awesome example of modern integrated electronics!
While we’re on the subject of Fujitsu, let me tell you a little bit of my data recovery background involving them…
Hard disk drive faults can occur for any number of reasons, sometimes wear and tear on the mechanical parts of the drive’s internals can lead to a drive failure, in other cases electronic faults on the drive’s PCB can lead to the failure of the drive, or even a mixture of both. Even a drive that is mechanically and electronically sound can fail, often leading to confusion in determining exactly what the cause of the failure is. The answer lies with the software that controls the hardware, that is stored on the platters, in the MCU, or both.
Quite a few years ago, when the data recovery industry was really taking off, new failures started cropping up: drives would spin up, make sounds as if initialising, and then…? …Nothing.
But what could be the cause? There was a very well known failure that appeared around the same era that data recovery companies started appearing as if they were being mass produced in a factory! This failure was found in a popular brand of consumer desktop hard disk drives manufactured by, of all companies, Fujitsu! These series all had model numbers beginning with either MPF or MPG. Before long, drives across these series started failing, going into failure territory like no drive had been known to before.
These drives weren't of the modern simple three-chip design; they had big PCBs with lots of circuitry. Once failed, the Fujitsu hard disks behaved normally, spinning and apparently initializing, but never becoming ready. Whilst common in all drives of the above series, the problem was particularly common in the MPG family, especially the 40GB and 20GB models: MPG3409AH, MPG3409AT, MPG3204AT, MPG3204AH.
To repair these drives, access to the micro-program that starts and controls the drive (the firmware) was required. Once access had been gained via the manufacturer's own unpublished ATA command set, the job of checking each of the firmware modules began. In most cases a temporary repair could be performed, long enough to extract a full clone onto a working device, by repairing certain logs in the drive's own firmware, replacing their contents with those from a known working drive of the same firmware revision. Results were often instant and long-lasting, but once a drive had failed there was only a finite length of time before it would fail again.
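The module-swap idea can be pictured as a byte-for-byte patch of one firmware module in a dump. This is purely illustrative: the offsets and lengths here are made up, and real module locations come from the vendor's unpublished command set, not from anything in this sketch.

```python
# Illustrative only: given full firmware dumps from the patient drive and a
# known-good donor of the SAME firmware revision, overwrite one module's
# bytes in the patient dump. Offset/length values are hypothetical.
def swap_module(patient: bytearray, donor: bytes, offset: int, length: int) -> None:
    """Replace patient bytes [offset, offset+length) with the donor's."""
    patient[offset:offset + length] = donor[offset:offset + length]

patient = bytearray(b"\x00" * 16)   # stand-in for the failed drive's dump
donor = bytes(range(16))            # stand-in for the known-good dump
swap_module(patient, donor, 4, 8)   # swap one hypothetical "module"
print(patient[4:12] == donor[4:12])  # True: module replaced, rest untouched
```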
A good few years after the first problems, Fujitsu finally admitted there were issues with the hard drives, cowardly blaming component manufacturers for the fault. The MPF and MPG series showed excellent promise, with good performance, a low price point and good build quality to boot; they should really have cemented Fujitsu's foundations in the consumer desktop hard disk drive business. Instead, it led to Fujitsu calling it a day on desktop hard drives and concentrating on notebook and Enterprise class devices.
Even today they are still utter crap. They use the same Marvell processors that a lot of Samsung and Western Digital drives do, but WD and Sammy drives seem much more reliable. Samsung and WD boards that match failed ones are also easier to source, as you don't need to match serial numbers (embedded in ROM and on the platters on Fujitsu's); WD's firmware and serial are just stored on the platters, as I think Samsung's still are.
No Comments »
Posted by Tidosho in Console Repair, Consumer Electronic Repair, PC Repair, Security System Repair/Installation/Upgrade, Work, tags: cowboys, eBay, ebay seller servicemanualseu, extortionate, how to get for free, illegal selling, rip off, service manuals, Tradebit
I’m seeing a massive trend on the Internet, and I don’t like it. People are taking free service manuals available on the internet, collecting them, and then selling them on. I feel this is illegal, because:
- They don’t own the copyright to the service literature.
- They don’t have the rights, nor permission to SELL for profit, or a license from manufacturers.
- They often charge extortionate prices!
All for stuff that isn't theirs! Tradebit, eBay, and all the other sites that charge on a per-manual basis I don't agree with, and detest them hugely. Add to that, they often slap their own watermarks on and secure the documents with passwords (tampering with stolen goods) so that no-one can edit them. Service manuals are only public because they're ILLEGALLY LEAKED, so all these arseholes are committing a criminal offence by selling stuff that isn't theirs.
Sites that offer unlimited downloads for a tiny monthly fee I agree with, as these aren’t extortionate, and they host the files on a server they pay for, so you’re not actually paying for the material, just the right to access the website. The ones I use even have permission, and pay royalties to manufacturers.
So, eBay sellers like “servicemanualseu”, and all those Tradebit cowboys selling a single 3 – 30MB file for $19.99, AVOID them. I often find that a quick Google reveals the stuff is available free elsewhere anyway. I’ve reported people like these to manufacturers, and a few have actually been disciplined, good riddance to ’em and all! These cowboys’ excuse is “we charge so much to stop DIY’ers”, but it isn’t your place or right of decision to say who can have them and not.
I have links with people in the electrical repair industry, being a qualified technician. I pay less for a whole batch of manuals direct from a manufacturer than one of these Tradebit cowboys charges for a single manual!
3 Comments »
I needed to do this recently because I had to clone a friend’s hard disk, both drives were SATA and my power supply only has one native SATA power connector. My optical drives are both SATA too, running off a Molex to 2xSATA adapter, but connecting 2 hard drives to one 12v plug causes power problems for the drives as they are high current, whereas optical drives aren’t.
So I've disconnected my optical drives to use just one of the 2 piggybacked SATA power leads. This left me with no drive to run Hiren's disc from, so I decided to make a USB boot drive instead. Here's how I did it! This is also very useful if you're on a laptop or desktop with a failed drive: simply use an ISO image instead of a physical disc.
You will need:
- A copy of Hiren’s Boot CD 10.3 or newer in either disc or ISO format, you can extract the ISO or mount it using a virtual CD program such as MagicDisc (freeware, yay!)
- A USB pen drive of AT LEAST 1GB (gigabyte)
- A copy of USB Disk Storage Format, which I’ve hosted on TideLog, HERE.
- A copy of GRUB 4 DOS installer 1.1, also hosted on TideLog, HERE.
- A computer with CD drive (for copying the disc files if you have a Hiren CD)
- Your computer MUST be able to boot from USB if it is the one you are working with in recovery. Most computers about 3 or 4 years old will be fine.
How to perform:
1. Download, extract and run USB Disk Storage Format tool. Follow my steps in my screengrab below, in numbered order:
The reason we're using this tool is that the drive needs to be formatted as FAT. A lot of people have tried using exFAT in Vista and Win 7 but it won't work, and sometimes Win XP gets it wrong. I prefer third party apps to format external drives.
2. Run GRUB4DOS Installer, and follow the numbered steps as below. MAKE SURE you select the correct drive:
3. Insert your BootCD (10.3 or newer) in your CD Drive and copy everything from the CD to your USB Drive. If you have an ISO image, you can either extract all files from it, or mount it using a virtual CD program like MagicDisc mentioned in the requirements, then copying and pasting from the virtual CD as you would a physical disc.
4. Copy grldr and menu.lst from grub4dos.zip (or from HBCD folder) to the USB drive:
5. Finished! Restart the computer it is to be used on, and make sure the BIOS is set to boot from USB. Most BIOSes allow you to bring up a boot menu, by pressing F12 or similar, it will then autodetect bootable devices, select your USB drive and off it goes!
Configuring different BIOSes
To enter the BIOS press the “Del” key on your keyboard. Alternatives are “F1”, “F2”, “Insert”, and “F10”. Some PC BIOSes might even require a different key to be pressed. Commonly a PC will show a message like “Press [Del] to enter Setup” to indicate that you need to press the “Del” key. Some AMI BIOS require you to enable the option “USB Keyboard Legacy support”.
For AMI BIOS:
- Go to “Feature Setup”. “Enable” these options: “USB Function Support”, “USB Function For DOS” and “ThumbDrive for DOS”. Go to “Advanced Setup”. Set the “1st Boot Device” to “USB RMD-FDD”.
Reboot the PC and it should now boot from your USB drive.
- Go to “USB Mass Storage Device Configuration”. Select “Emulation Type”, and set it to “Harddisk”. Go to the “Boot Menu” and set the “1st boot device” to “USB-Stick“. Exit the BIOS, saving the changes. You can try setting “Emulation Type” to “Floppy” or “Forced FDD”.
For PHOENIX/AWARD BIOS:
- Go to “Advanced BIOS Features”. Go to the “1st Boot device” and set it to “USB-ZIP”.
No Comments »
I've rebuilt her with new base plastics, replaced the screen and hinges, and got a temporary palmrest while I'm waiting on a decent one coming. She's running brilliantly; even with Intel GMA X3100 graphics, HD videos at both 720p and 1080p are really smooth!
I've bought a genuine Toshiba battery second-hand from a guy on eBay who said it was excellent. As I detailed in THIS POST, I ran the latest HWMonitor on it, and this machine has a battery sensor like my old Amilo in that old post. It is a battery that is almost factory new, take a look:
The capacity is still 100%, and CPUID have even added a Wear option for those that don't understand the mWh ratings. It's like a layman's wear and tear status; even for a techie like me it's great for at-a-glance status checking of your battery.
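CPUID don't publish their formula, but a Wear figure like this is normally derived from the two mWh numbers the battery chip reports: the designed capacity and the current full-charge capacity. A minimal sketch, assuming that standard calculation (the example mWh values are made up):

```python
# How a "Wear" percentage can be derived from the battery chip's mWh
# readings. Assumes the standard formula (design capacity vs current
# full-charge capacity); CPUID's exact method isn't published.
def battery_wear_percent(design_mwh: float, full_charge_mwh: float) -> float:
    """0% = factory-fresh cells; higher = more worn. Clamped at 0."""
    return max(0.0, (1.0 - full_charge_mwh / design_mwh) * 100.0)

print(battery_wear_percent(57720, 57720))  # 0.0 -- a factory-new pack
print(battery_wear_percent(57720, 46176))  # 20.0 -- pack holds 80% of design
```

An out-of-sync battery chip can make both numbers lie, which is exactly why a reset sometimes "restores" capacity.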
Download it, run it on your laptop, see what you get. Not all laptops have the motherboard battery capacity sensor pin connected though, there are several pins. As well as your usual positive and negative, you have pins for the following:
a) Pin(s) for basic capacity readings from the control IC in the battery. All laptops have these pins wired to the power management IC on the motherboard.
b) Pins that can read the wear and capacity status from the battery chip too. These are always present, but not soldered to anything on the motherboard on some laptops.
c) Pins that the battery control IC sends temperature signals through. The IC can also order a system shutdown if the battery runs too hot by sending a signal to the motherboard power control chip through these pins.
There will likely be support circuitry missing too, like resistors, capacitors and diodes, if your system doesn't support it, so it isn't just a case of bridging the pins, so to speak. If your laptop doesn't support it, it's likely the power control chip on your motherboard doesn't have the feature anyway. Some ITE and Winbond chips don't, but some do.
The reason a battery loses capacity isn't always because the cells have worn out, but because the battery control chip is either defective or out of sync with the cells' capacity. 99.9% of the time it is nothing to do with the motherboard control system. All that does is feed the battery with power, and the feed is adjusted based on the readings from the battery chip as it charges. The motherboard charging system then goes into trickle charge once the battery is full, and this is actually what wears a battery down over time.
If you aren't using the battery, discharge it before storing it; you should never store a fully charged battery, as the cells deteriorate. Don't leave the battery in the laptop unnecessarily if you're always on mains.
All I need now for my Toshy is a DVD fascia, a DVD mounting bracket (yup, it's missing, I just slotted a DVD-RW in for now), a hard drive caddy, and a new screen surround and lid. They must have been damaged in the impact, as the front bezel of the screen doesn't clip where the hinges are and along the bottom of the screen, and the clips on the wiring cover are all burst.
Apart from that, she’s a lovely runner, and is now with a much more caring and gentle owner. I’m surprised it survived that impact, as not a lot of impact damaged laptops I work on do.
No Comments »
Many people confuse the two, or think they’re the same. They’re not. I’m going to demystify the myth, so to speak!
Integrated graphics are a graphics system that is integrated into the motherboard chipset (the northbridge on Intel systems), like S3 Mirage and Intel GMA, so the chip controls both graphics and other subsystems. They do not have their own dedicated memory; instead they use system memory, which in my opinion is a good thing, because you won't get expensive-to-repair soldered faulty graphics RAM. If the RAM does go bad, simply replace the module(s). The bad part is that system RAM is often a lot slower than dedicated GDDR2/3/4 chips onboard, and using system memory can also cause bottlenecks as the graphics memory bus is slowed down to the system memory bus speed.
Integrated chipsets (Intel's graphics-equipped one is known as the GMCH, or Graphics and Memory Controller Hub) are not intended for gaming, but mainly light work. They can handle video (Intel's GMA HD range can handle HD video) and possibly older games, like Doom 1 & 2, but nothing heavy.
Discrete graphics are a dedicated graphics chip soldered to the motherboard, like the nVidia GeForce G M chips. They have their own dedicated GDDR memory soldered to the board, running on its own super fast memory bus, so they are not limited to the slower system memory bus speed. Discrete graphics are almost always much faster than integrated graphics, and are much better for gaming. They can be soldered to the main motherboard along with voltage regulators and video RAM, or they can be on a separate card.
Then of course, after Integrated and Discrete systems, you have full blown laptop PCI-E cards, which can either be full desktop class cards or mobile versions. Some ultra high end Clevo Core i7 laptops even have both Discrete/Integrated graphics and a PCI-E card, allowing you to switch between the two for different applications!
No Comments »