Showing posts with label Hardware. Show all posts

NEC Releases 1st USB 3.0 Host Controller LSI



NEC Electronics Corp will release the "μPD720200," a host controller LSI that supports USB 3.0. "We will be the first to release a USB 3.0 host controller in the world," said Shigeo Niitsu, associate vice president of the 2nd SoC Operations Unit. 

NEC Electronics is planning to start sample shipments in early June 2009 and volume production at one million units per month in September 2009. The price of a sample is ¥1,500 (approx US$15.6). It is expecting the SoC to be used in PCs first. 



"We have already entered specific discussions with PC manufacturers all over the world," Niitsu said. "The SoC will be in PCs from an early stage."

The company forecast that a PC equipped with USB 3.0 ports will emerge by the end of 2009 at the earliest.

The μPD720200 has two USB 3.0 ports, both of which support 5Gbps transmission, the highest data transmission speed of USB 3.0. Its power consumption is "less than 1W in operation," Niitsu said. The SoC comes in a 10 x 10mm, 176-pin FBGA package.

NEC Electronics is expecting USB 3.0 to start to be featured in PCs in 2009 and to spread in earnest in and after 2010. USB 3.0-compatible PCs will account for about 30% of PCs in 2011 and 80% of PCs in 2012, the company forecast. The number of PCs that support USB 3.0 will reach approximately 140 million units in 2011 and 340 million units in 2012, it said.

Also, the company believes that USB 3.0 host controllers will be available in chipsets as well as standalone host controllers in 2011.

Excluding those to be mounted on chipsets, "We will be able to acquire almost 100% share" in the standalone host controller LSI market, Niitsu said. According to the company's estimates, PCs equipped with a USB 3.0 host controller LSI will reach 26 million units in 2010, 60 million units in 2011 and 90 million units in 2012.

Moreover, as a USB 3.0-related business, NEC Electronics is planning to launch an ASIC business for "device side" USB controllers, such as USB-SATA bridge LSIs, with fabless manufacturers. NEC Electronics will, for example, manufacture physical layer circuits and chips in this business, it said.

The company is planning μPD720200-related exhibitions and demonstrations at the SuperSpeed USB Developers Conference, which will take place in Tokyo May 20 and 21, 2009.

Source: Tech-On


USB 3.0 at the Consumer Electronics Show (CES) in Las Vegas


Intel demonstrated a working version of USB 3.0 at the Consumer Electronics Show (CES) in Las Vegas last week. Here's why it will make eSATA and FireWire obsolete.


USB 3.0 is expected to hit the market in early 2010, a full 10 years after the now-ubiquitous USB 2.0 was introduced (April 2000). The current USB 2.0 specification runs at a theoretical maximum speed of 480Mbps and can supply power.

According to the USB Implementers Forum, 2 billion USB 2.0 devices shipped in 2006 (one for every three people in the world), and the installed base was 6 billion (almost one for every person in the world). In November 2007, the USB Implementers Forum announced the USB 3.0 specification, and Intel officially demonstrated the technology at CES 2009.

Now, the juice: USB 3.0 promises a theoretical maximum rate of 5Gbps, meaning it's 10 times faster than USB 2.0. USB 3.0 is also full duplex, meaning it can upload and download simultaneously (it's bi-directional); USB 2.0 is only half duplex.


Put side by side with eSATA and FireWire 800, USB 3.0 is far superior. eSATA, an external connection that runs at the same speed as the internal SATA 3Gbit/s bus, has a theoretical maximum of 3Gbps. This makes USB 3.0 faster than eSATA and about six times faster than FireWire 800 (full duplex at 800Mbps).

USB 3.0 also provides another advantage: while eSATA is faster than FireWire 800, it cannot supply power as FireWire can. USB 3.0 is faster than both, even while supplying power.
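For a rough sense of what these figures mean in practice, here's a small Python sketch (our own arithmetic, not from the original article) that converts each interface's theoretical maximum into the time needed to move a 1 GB file:

```python
# Theoretical maximum throughput of each interface, in megabits per second.
interfaces = {
    "USB 2.0": 480,
    "FireWire 800": 800,
    "eSATA (SATA 3Gb/s)": 3000,
    "USB 3.0": 5000,
}

FILE_SIZE_MBITS = 1 * 1024 * 8  # a 1 GB file expressed in megabits

for name, mbps in interfaces.items():
    seconds = FILE_SIZE_MBITS / mbps
    print(f"{name:>20}: {seconds:5.1f} s")
```

In practice every interface falls short of its theoretical maximum because of protocol overhead, so treat these as best-case lower bounds.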

Finally, USB 3.0 has improved power management, meaning that devices can move into idle, suspend and sleep states. This potentially means more battery life out of laptops and other battery-based USB-supporting devices like cameras and mobile phones.

Of course, there are other factors to consider; the FireWire 3200 standard is also in the works and promises 3.2Gbps speeds on existing FireWire 800 hardware. USB 2.0 generally doesn't reach its theoretical maximum throughput, due to its dependence on hardware and software configuration, whereas FireWire gets much closer.

It's hard to say whether USB 3.0's updated architecture will still use more CPU time than FireWire does.

But in the age of powerful hardware (can anyone say "3.2GHz, quad-core CPUs"?), all of this means that FireWire is still not going to match USB 3.0's theoretical maximum of 5Gbps.

The ultimate signal that this war has already been won is Apple's recent decision to ditch FireWire from its consumer line in favor of USB. Previously, Cupertino had been one of FireWire's greatest advocates. And surely the company will be one of the first to adopt USB 3.0.

All in all, we can't wait for motherboard manufacturers like Gigabyte and Asus to start supporting the technology and mainstream PC builders like Dell to start integrating it into their products. Bring on the speed.


Fusion-io Announces New Fastest and Most Innovative SSD


PCI Express, server-based solid-state storage offering sets a new standard for enterprise application-centric storage, with up to 640 gigabytes of capacity and 1.5 gigabytes per-second of sustained throughput

SALT LAKE CITY - March 11, 2009 - Fusion-io, the leader in solid-state architecture and high-performance I/O solutions, today announced the ioDrive Duo, which doubles the slot capacity of Fusion-io’s successful PCI Express-based ioDrive storage solution. The new ioDrive Duo is the market’s fastest and most innovative server-based solid-state storage solution. 

With the ioDrive Duo, it is now possible for application, database and system administrators to get previously unheard-of levels of performance, protection and capacity utilization from a single server. Performance for multiple ioDrive Duos scales linearly, allowing any enterprise to scale performance to six gigabytes per-second (Gbytes/sec) of read bandwidth and over 500,000 read IOPS by using just four ioDrive Duos.

“Many database and system administrators are finding that SANs are too expensive and don’t meet performance, protection and capacity utilization expectations,” said David Flynn, CTO of Fusion-io. “This is why more and more application vendors are moving toward application-centric solid-state storage. The ioDrive Duo offers the enterprise the advantages of application-centric storage without application-specific programming.”

ioDrive Duo Product Details

The following specifications describe the physical and performance characteristics of the ioDrive Duo.


PERFORMANCE
Based on the PCI Express x8 or PCI Express 2.0 x4 standards, which can sustain up to 20 gigabits per second (Gbits/sec) of raw throughput, the ioDrive Duo has more than enough bandwidth to deliver industry-leading performance from a single card. The ioDrive Duo can easily sustain 1.5 Gbytes/sec of read bandwidth and nearly 200,000 read IOPS. Its performance metrics are as follows:

• Sustained read bandwidth: 1500 MB/sec (32k packet size)
• Sustained write bandwidth: 1400 MB/sec (32k packet size)
• Read IOPS: 186,000 (4k packet size)
• Write IOPS: 167,000 (4k packet size)
• Latency: <

RELIABILITY
The ioDrive Duo offers unmatched solid-state protection for data integrity and reliability with triple redundancy for a single storage component.

• Multi-bit error detection and correction
• Patent-pending Flashback protection, offering chip-level N+1 redundancy and on-board self-healing so that no servicing is required
• Optional RAID-1 mirroring between two ioMemory modules on the same ioDrive Duo, offering complete redundancy on a single PCIe card
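As a quick sanity check on the performance figures quoted above (our own arithmetic, not Fusion-io's), the random-read IOPS multiplied by the block size should come out below the sustained sequential bandwidth:

```python
# Quoted ioDrive Duo figures: 186,000 read IOPS at a 4 KiB block size,
# against 1,500 MB/s of sustained sequential read bandwidth.
read_iops = 186_000
block_bytes = 4 * 1024
seq_read_mb_s = 1500

# Bandwidth implied by the small-block IOPS figure, in MB/s.
random_mb_s = read_iops * block_bytes / 1_000_000
print(f"Implied random-read bandwidth: {random_mb_s:.0f} MB/s")

# Random small-block throughput comes in below sequential throughput,
# as expected, so the two quoted numbers are mutually consistent.
assert random_mb_s < seq_read_mb_s
```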

CAPACITY
The ioDrive Duo comes in the following capacities: 

• 160 Gbytes
• 320 Gbytes
• 640 Gbytes
• 1.28 TB (second half of 2009)

The ioDrive Duo will be available in April 2009. To find out more about how this and Fusion-io's other enterprise solid-state storage products can benefit your organization, please visit the Fusion-io site.



Burner Stalled After SP2 Installation


Friday, 08 August 2008
THE PROBLEM: Not long ago, Microsoft launched Service Pack 3 for Windows XP. A CHIP reader, however, had only just installed Service Pack 2 (SP2), and even that installation ran into trouble. The burner in his Yakumo notebook (a Matshita/Panasonic UJ-811B) no longer functioned after the SP2 installation. Nero refused to burn media, or aborted after several attempts. The drive tray then slid out and the software reported an "unknown error".

 
THE BURNER PROBLEM: The Matshita UJ-811B did not function under Windows XP SP2.
THE DIAGNOSIS: CHIP examined the problem in the CHIP Test Center. After three failed attempts to burn a CD, CHIP updated every software component. First, CHIP installed the latest driver for the notebook's VIA chipset, updated Nero 6 to version 6.6.1.15a, and tried to flash new firmware onto the burner. The firmware, however, could not be transferred to the notebook, even after the burner was moved to a computer in the CHIP Test Center.
As it turned out, the burner used OEM firmware specific to Yakumo, incompatible with Matshita's update tools. Since an update to Nero 8 did not solve the problem either, CHIP then tested the burner in another notebook, one using an Intel 915M chipset. There, the burner worked without problems and a CD was burned successfully. Another DVD burner also worked in the Yakumo notebook, while the notebook's bundled Matshita drive would not function at all.

THE SOLUTION: The likely cause was a compatibility issue between the notebook chipset and the burner, triggered by a driver conflict in SP2. The system could not be downgraded to SP1 because of the number of security holes in that version of Windows XP. After lengthy consultation, the owner finally decided to buy a new burner, a Sony NEC Optiarc AD-7543A. Besides supporting double-layer media and burning at high speed, the new burner's firmware was already at the current version.
Quoted from: Chip.co.id


TEST & TECHNOLOGY - HD-DVD vs Blu-Ray 2


page 2 of 2

The Density Of The Data

Both Blu-Ray and HD-DVD use the standard video encodings (MPEG-2, MPEG-4, VC-1 and others) as well as the standard audio formats Dolby, DTS, and others. Films are also encoded at the same frame size and rate, namely 1920x1080 at 24 frames per second (fps). The two formats are likewise uniform in using a blue laser with a 405 nm wavelength. Both can downscale the picture to 720p (1280x720 resolution) and convert it up to the frame rates TVs require, 30 fps or 60 fps.

So where is the difference? It lies in the aperture of the lens used to focus the laser, and in the thickness of the disc's cover layer. Blu-Ray can focus the laser beam more tightly, which lets the disc store data more densely. This is what allows Blu-Ray to hold 25 GB on a single layer and 50 GB on a dual layer, compared with HD-DVD's 15 GB and 30 GB for single and dual layer.
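The scaling can be sketched numerically. The focused spot diameter is roughly proportional to the wavelength divided by the lens's numerical aperture (NA), so areal density scales with (NA/λ)². The NA figures below (0.85 for Blu-Ray, 0.65 for HD-DVD) are the commonly cited values, not taken from this article:

```python
# Areal density scales roughly with (NA / wavelength)^2, since the
# focused spot diameter is proportional to wavelength / NA.
# Both formats use the same 405 nm blue laser, so only NA matters here.
NA_BLURAY, NA_HDDVD = 0.85, 0.65  # commonly cited lens apertures

density_ratio = (NA_BLURAY / NA_HDDVD) ** 2
capacity_ratio = 25 / 15  # single-layer capacities quoted above, in GB

print(f"Optical density ratio:  {density_ratio:.2f}")
print(f"Quoted capacity ratio:  {capacity_ratio:.2f}")
```

The optical ratio (about 1.71) lines up well with the quoted single-layer capacity ratio of 25 GB to 15 GB (about 1.67).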

The winner: Blu-Ray, which won the high-definition DVD format war.


Picture Quality
Still related to data density, picture quality cannot be separated from the capacity of the two formats. Given the difference in storage capacity, an occasional reduction in picture quality cannot be avoided. Each compression format forces the film studio into a tug-of-war between picture quality and data size. The choice: encode the film at high quality but consume more storage space, or reduce the picture quality to save space. This does not mean the picture is shrunk (it keeps the same frame size and frame rate as above); rather, the "frequency response" of the pixels is lowered. Imagine buying two copies of the same film, one encoded at high quality and the other at barely adequate quality. The frame size stays the same, but one of them is clearly more pleasant to watch.

If a film studio wants to record at high quality and the film runs long, the remaining disc space may not suffice for extra material such as trailers. For Blu-Ray this is no problem, since it has the larger capacity. For HD-DVD, however, it can be: the studio must choose between reducing picture quality and adding a second disc so that all the film material fits (which raises the retail price).
Similar but not the same: physically, the two formats look alike, but they differ in several capacities.

Why two formats?

As consumers, we obviously want a single, settled format. For film studios this situation is also troublesome, because they must choose whether to release a film on Blu-Ray or HD-DVD. And as a consumer, anyone who does not own both high-definition disc players is certain to hesitate when buying a film.

For the makers of Blu-Ray and HD-DVD, however, the two rival formats were really born of the money factor. If every licence for their own format sells, a large profit awaits.

Bit rate
Blu-Ray's maximum bit rate is 48 Mbit/s, while HD-DVD is capped at 30.24 Mbit/s. The higher the bit rate, the sharper and smoother the picture, and the richer the sound. A film encoded at a low bit rate simply will not look as smooth. HD-DVD's picture is by no means ugly, but Blu-Ray has more headroom to push a clearer picture and more thunderous sound.
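To see what those bit rates mean for disc space, here is a quick calculation (our own, not from the article) of how much room a two-hour film occupies when streamed at each format's maximum:

```python
def film_size_gb(bitrate_mbit_s: float, hours: float) -> float:
    """Disc space consumed by a stream at the given bit rate, in GB."""
    bits = bitrate_mbit_s * 1_000_000 * hours * 3600
    return bits / 8 / 1_000_000_000  # bits -> bytes -> GB

# Maximum bit rates quoted above for each format.
for fmt, rate in [("Blu-Ray", 48.0), ("HD-DVD", 30.24)]:
    print(f"{fmt}: {film_size_gb(rate, 2):.1f} GB for a 2-hour film")
```

At its 48 Mbit/s maximum, a two-hour Blu-Ray film already needs about 43 GB, more than a 25 GB single layer holds; this is exactly the quality-versus-space trade-off described earlier.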

The support of the film studios
The entertainment world is indeed the biggest market, and at the same time the key battleground, for the two high-definition formats. At the start of 2008 there were still three film studios supporting HD-DVD, namely Paramount, Universal, and Warner Bros, which released in both formats. When Warner Bros decided to release films only in the Blu-Ray format, it was the key to ending the war. Finally, Toshiba, HD-DVD's chief backer, decided to stop the production and development of HD-DVD on February 19, 2008.

Source: online magazine Chip.co.id

back to page 1


TEST & TECHNOLOGY - HD-DVD vs Blu-Ray


Page 1 of 2

Behind the Blu-Ray Victory

A tragic end befell HD-DVD. Toshiba recently announced that it would no longer produce HD-DVD. With this announcement, Blu-Ray succeeded in beating HD-DVD to become the single next-generation DVD format.
For consumers who bought a Blu-Ray player, and for PS3 game maniacs, this news is clearly pleasing. On the other hand, those who already bought an HD-DVD player must swallow their disappointment, because no more films will be produced in the HD-DVD format beyond those already on the market.

In the near term, the films already produced will probably have their prices slashed until stock sells out. Xbox 360 owners are likewise caught up in HD-DVD's demise. From here on, though, consumers need no longer be confused about which format to choose. Why did HD-DVD have to lose to Blu-Ray? Let's look at the following review.

PS3 vs. XBOX 360
Until this report went to print, there had been no official response from Microsoft regarding the "death" of HD-DVD production. Although (fortunately) it was not bundled with the Xbox 360 game console, the HD-DVD drive was a main accessory bought separately. By contrast, for Sony, Microsoft's rival, every PlayStation 3 (PS3) sale can be counted as a Blu-Ray victory, because the PS3 and Blu-Ray are sold in one package.

The death of HD-DVD production shows that Microsoft made the wrong choice. Had HD-DVD been bundled with the Xbox 360, HD-DVD's lifespan might have been different, since that console has sold 17 million units worldwide, exceeding the PS3's 10.5 million (figures through the end of 2007). With this latest development, PS3 sales will surely accelerate, while Xbox looks for a replacement for HD-DVD. Meanwhile, it will not be easy for Microsoft to rescue Xbox 360 sales. Will Microsoft use Blu-Ray, its rival's product, in the Xbox 360? Not an easy business decision, but we will have to wait and see.

Anti-scratch
Blu-Ray stores its data just 0.1 mm beneath the disc surface, shallower than HD-DVD, which keeps it at a depth of 0.6 mm. Thanks to its hard anti-scratch coating, however, data stored on a Blu-Ray disc is actually safer than on HD-DVD. Even after rough treatment, the Blu-Ray surface stays smooth.

On YouTube, someone once scratched a Blu-Ray surface for fun, yet the data inside could still be read. To see the video, open www.youtube.com/watch?v=o5jEbZt6AIQ or search for the keyword "Blu-Ray disc stress test".


go to page 2


Intel: Nehalem + SSD


Finally, CHIP came face to face with the Nehalem platform. A meeting on June 6, 2008, in a closed room at the Hyatt in Taipei brought CHIP together with Nehalem. Remember, Nehalem is Intel's newest processor, and its design is not yet 100% finished. The plan is for Nehalem to launch at the end of this year.

Platform Nehalem + Intel SSD RAID 0

Before moving on to the Nehalem platform, CHIP was shown the Atom platform (UMPC) and the G45. Intel demonstrated that on the Atom platform all applications still ran smoothly, even fairly heavy video. Meanwhile, on the G45 platform, Intel showed Blu-Ray film playback directly from a BD-ROM drive over a USB connection. Utilization of the E8400 processor reached only around 25%. Very good, and it still left "room to move" for other applications.

The Intel G45 platform: the integrated VGA is enough for HD video needs

Intel showed no general benchmarks. Instead, CHIP was offered a demonstration of SSD capability: a Nehalem platform with the Tylersburg chipset, fitted with a full set of SSDs in a RAID 0 configuration. The system is very fast; even so, when an antivirus scan was run, utilization of the Nehalem processor, with its 4 cores and HT/SMT technology (already overclocked quite high), could still reach around 90%! And when the Sony Vegas application was set to work processing High Definition (HD) video data, no slowdown was felt, as though the CPU were processing low-res video. The Intel side said this was exactly the point it wanted to make.
The Nehalem platform with 3-channel DDR3. Pay close attention to the memory being used

The conclusion from this demonstration: a fast SSD pushes processor utilization higher. Naturally, this happens because the processor no longer waits long for the data it needs from storage, while the controller works very hard to deliver that data.

Nehalem Motherboard: Normal Components

From a first look at the Nehalem system, not much information could be gleaned. The system was already overclocked quite high, yet it could still use an ordinary HSF in a closed case with a PSU below 700 Watts.

The Nehalem motherboard: an Intel version with a POST indicator LED

That was all we received from the Intel technicians, who happily shared with CHIP. Several matters could not yet be revealed at this time, and several other aspects we will cover in further Computex reporting.

Source: online magazine Chip.co.id


