The latest iMac is released with some rocking features

The new iMac features nearly twice the processing speed, advanced graphics, and ultrafast Thunderbolt I/O. You don’t usually find something this fast and sleek on a desk.

The latest tablets

Tablets are going to be the new trend at technology events. They are setting all the sales records, with the most sales this Christmas and New Year season, and they look set to follow the trend of netbooks and similar devices.

The new iPhone 4 is the best one

The new iPhone 4 is making waves in sales, setting records on the market. This fourth generation is built with the best technologies, and it promises a good tomorrow.

MacBook

The MacBook is one of the world's top laptop brands. It includes the slimmest laptops in the world and some of the fastest; Apple has made one of the best products in this field.

Nokia smartphones

Nokia smartphones drive today's phone technology. Nokia is one of the world's leading makers of smartphones and phones of all types, and they helped shape the smartphones of today.


Monday, 26 December 2011

Crippled Air Force satellite: Risky rescue


It was an epic space rescue that, in audacity and risk, echoed NASA’s campaign to save the astronauts aboard the doomed Apollo 13 moon mission. The biggest difference between the 1970 Apollo operation and the 14-month recovery of AEHF-1, an Air Force communications satellite, is that money was the only thing immediately at stake in the latter.
Granted, it was quite a lot of money: around $2 billion. And the satellite’s loss would also set back the Pentagon’s efforts to revamp its communications infrastructure as battle becomes more bandwidth-intensive.
The details of AEHF-1's rescue, completed in October this year, are only now becoming clear as members of the Air Force team speak out. Saving the pricey, long-in-development comms satellite — one of a planned six-craft constellation meant to relay data between military forces scattered across the globe — involved some bold decision-making, a lot of creative engineering, not a little bit of luck and, last but not least, a steady supply of pizzas delivered to the Space and Missile Systems Center at Los Angeles Air Force Base, where military and contract space operators worked around the clock to plan the satellite's recovery.
The brand-new Advanced Extremely High Frequency communications satellite (pictured) was 140 miles over the Earth’s surface before controllers knew anything was wrong. As far as the space operators knew, the Lockheed Martin-built satellite was functioning perfectly. It was Oct. 15, 2010, just one day after the 7-ton AEHF-1 had blasted into orbit atop an Atlas rocket. The controllers planned to activate the satellite’s hydrazine engine in order to alter the spacecraft’s flightpath, gradually transitioning from an oblong elliptical orbit to a circular, geosynchronous one allowing steady coverage of the Earth below.
But when the operators ordered the engine to ignite, nothing happened. They tried again; still nothing. They didn’t know it at the time, but a fuel line had become clogged. The blockage “was most likely caused by a small piece of cloth inadvertently left in the line during the manufacturing process,” according to the Government Accountability Office.
Repeated attempts to fire the engine very nearly caused an explosion. Just in time, David Madden, who oversees comms satellites at the Space and Missile Systems Center, consulted with his engineers and told the operators to stop trying the engine. “We’re very, very fortunate that satellite didn’t blow up,” Gen. William Shelton, head of Air Force Space Command, told Air Force magazine.
AEHF-1 was intact but stranded in a slowly decaying and useless orbit.
Madden told his engineers to figure out some way to salvage AEHF-1 — and not to leave their room at the Space and Missile Systems Center until they did. “We literally were shoving pizza under the door so that these guys could keep working,” Madden recalled.
A week later, they had a plan. Lt. Gen. John Sheridan, then the space center commander, approved it. The basic idea was to use the satellite’s small thrusters, intended for minor course corrections, to shift the orbit thousands of miles. It would take 450 separate maneuvers, carefully managed over a period of 14 months. “AEHF-1 will be able to get to where it’s supposed to go,” analyst Mark Stout noted. “It’ll just take a year longer than planned.”
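The scale of that thruster campaign can be ballparked with the vis-viva equation. The sketch below is a back-of-the-envelope estimate, not the Air Force's actual maneuver plan: the perigee and apogee altitudes are rough assumptions taken from the figures in the story, and the real recovery also had to manage heating, fuel, and orbital traffic.

```python
import math

MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378e3    # Earth's equatorial radius, m

def vis_viva(r, a):
    """Orbital speed at radius r on an orbit with semi-major axis a."""
    return math.sqrt(MU * (2 / r - 1 / a))

# Assumed values: a ~225 km perigee (roughly the 140 miles reported)
# and a geosynchronous apogee at ~35,786 km altitude.
r_perigee = R_EARTH + 225e3
r_geo = R_EARTH + 35_786e3
a_transfer = (r_perigee + r_geo) / 2

# Speed at apogee of the elliptical transfer orbit vs. circular GEO speed:
# the difference is roughly the delta-v needed to circularize.
v_apogee = vis_viva(r_geo, a_transfer)
v_circular = math.sqrt(MU / r_geo)

dv_total = v_circular - v_apogee   # m/s, on the order of 1.5 km/s
dv_per_burn = dv_total / 450       # spread across ~450 small maneuvers

print(f"total delta-v ~{dv_total:.0f} m/s, ~{dv_per_burn:.1f} m/s per burn")
```

Spread over 450 burns, each maneuver only needs a few meters per second of velocity change, which is why small station-keeping thrusters could do a job meant for the main engine, given 14 months.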
It was risky. “There’s no instruction manual for how to do that,” Madden said of the thruster strategy. “It’s basically an art.”
As the controllers inched AEHF-1 towards its correct orbit, Air Force officials began negotiations with Lockheed, seeking financial compensation. “It should not have happened,” Deputy Undersecretary of the Air Force for Space Programs Richard McKinney said of AEHF-1's fuel-line blockage.
Soon, three new complications arose with the crippled satellite.
First, with each firing of its thrusters, AEHF-1 was held stationary, exposing it to greater amounts of sunlight — and potentially overheating the spacecraft. Madden’s people had to devise new maneuvers, periodically flipping the satellite to allow hot components to cool down.
Second, AEHF-1 risked running out of gas. Engineers wrote new software meant “to save every ounce of fuel,” according to Air Force magazine’s detailed account of the rescue.
Finally, the orbital shift required crossing paths with scores or even hundreds of other spacecraft. Air Force controllers from a separate unit handled traffic management while Madden and his people focused on the fuel and heat issues.
On Oct. 24, AEHF-1 reached its originally planned orbit. Testing began soon afterward. The Air Force expects to bring the satellite into service in March. Meanwhile, two more AEHFs are slated to launch in 2012.
After an initial bout of very bad fortune, the Air Force got “very lucky” with AEHF-1, service Undersecretary Erin Conaton said.
The space and flying branch might need that luck again very soon. Lacking its own production and launch facilities, the Air Force has no choice but to trust Lockheed to get AEHF-1's sister spacecraft right, Stout wrote. “While Lockheed is no doubt embarrassed, I don’t think they’re quaking in their boots as another five AEHFs are in the queue.”
Somewhere in Los Angeles, AEHF-1's rescuers are no doubt holding their breath, hoping they won't have to repeat the yearlong feat of engineering derring-do that saved the Air Force $2 billion and preserved the Pentagon's space communications systems.

Sunday, 25 December 2011

Tumblr is ruled by Indiana teen with wry comics



Online media has delivered its share of stars. Myspace brought us Tila Tequila; Twitter delivered @shitmydadsays; and let’s not forget, before YouTube we were sad and Bieberless. Then there’s Tumblr. Founded in 2007, the popularity of the largely image-based blogging service has ebbed and flowed, but Tumblr never had a celeb to call its own. Until now.
Taylor-Ruth Baldwin, 17, created Hanging Rock Comics, a first-person chronicle of high school angst. The Star Wars-obsessed junior started her Tumblr last summer, posting comic panels from her diary. “It was a way of venting my frustrations,” Baldwin says. “I didn’t think much would come of it.” But she struck a nerve, and in a few months she had 15,000 followers. Her posts often get thousands of notes—reblogs, likes, and replies. One got more than 35,000, which is on par with posts by mainstream news orgs. And last fall, a Baldwin lookalike contest got 400-plus adults, kids, and animals aping her style of band T-shirt, big glasses, and braid.
It’s all the more surprising given this is her first foray into social media. “Before Tumblr, I didn’t check my email,” admits Baldwin, who loves “old” technology like VHS tapes, Walkmans, and records. Spider-Man is her favorite superhero, and her new life seems to mimic Peter Parker’s. “At school I disappear into the crowd,” she says. “I go home and there’s all these people online that like me.” Baldwin’s breakout isn’t the only sign that Tumblr’s own star is on the rise. Just weeks after securing $85 million in funding, the platform scored its most famous devotee: President Obama.

Saturday, 24 December 2011

Fastest nonexistent supercomputer by Amazon

 
The 42nd fastest supercomputer on earth doesn’t exist.
This fall, Amazon built a virtual supercomputer atop its Elastic Compute Cloud — a web service that spins up virtual servers whenever you want them — and this nonexistent mega-machine outraced all but 41 of the world’s real supercomputers.
Yes, beneath Amazon’s virtual supercomputer, there’s real hardware. When all is said and done, it’s a cluster of machines, like any other supercomputer. But that virtual layer means something. This isn’t a supercomputer that Amazon uses for its own purposes. It’s a supercomputer that can be used by anyone.
Amazon is the poster child for the age of cloud computing. Alongside their massive e-tail business, Jeff Bezos and company have built a worldwide network of data centers that gives anyone instant access to computing resources, including not only virtual servers but virtual storage and all sorts of other services that can be accessed from any machine on the net. This global infrastructure is so large, it can run one of the fastest supercomputers on earth — even as it’s running thousands upon thousands of other virtual servers for the world’s businesses and developers.
This not only shows the breadth of Amazon’s service. It shows that in the internet age, just about anyone can run a supercomputer-sized application without actually building a supercomputer. “If you wanted to spin up a ten or twenty thousand [processor] core cluster, you could do it with a single mouse click,” says Jason Stowe, the CEO of Cycle Computing, an outfit that helps researchers and businesses run supercomputing applications atop EC2. “Fluid dynamics simulations. Molecular dynamics simulations. Financial analysis. Risk analysis. DNA sequencing. All of those things can run exceptionally well atop the [Amazon EC2 infrastructure].”
And you could do it for a pittance — at least compared to the cost of erecting your own supercomputer. This fall, Cycle Computing set up a virtual supercomputer for an unnamed pharmaceutical giant that spans 30,000 processor cores, and it cost $1,279 an hour. Stowe — who has spent more than two decades in the supercomputing game, working with supercomputers at Carnegie Mellon University and Cornell — says there’s still a need for dedicated supercomputers you install in your own data center, but things are changing.
“I’ve been doing this kind of stuff for a while,” he says, “and I think that five or 10 years from now, researchers won’t be worrying about administering their own clusters. They’ll be spinning up the infrastructure they need [from services like EC2] to answer the question they have. The days of having your own internal cluster are numbered.”

To Cloud or Not to Cloud
The old guard does not agree. Last month, during a round table discussion at the Four Seasons hotel in San Francisco, many of the companies that help build the world’s supercomputers — including Cray and Penguin Computing — insisted that cloud services can’t match what you get from a dedicated cluster when it comes to “high-performance computing,” or HPC. “Cloud for HPC is still hype,” said Charlie Wuischpard, the CEO of Penguin Computing. “You can do some wacky experiments to show you could use HPC in that environment, but it’s really not something you would use today.”
But it is being used today. And Amazon’s climb up the Top 500 supercomputer list shows that EC2 has the capacity to compete with at least the supercomputers that are built with ordinary microprocessors and other commodity hardware parts. “Rather than building your own cluster,” says Jack Dongarra, the University of Tennessee professor who oversees the annual list of the Top 500 supercomputers, “Amazon is an option.”
Amazon’s virtual supercomputer wasn’t nearly as powerful as the massive computing clusters sitting at the peak of the Top 500. It could handle about 240 trillion calculations a second — aka 240 teraflops — while the machine at the top of the list, Japan’s K Computer, reaches 10 quadrillion calculations a second, or 10.51 petaflops. As Dongarra points out, clusters like the K Computer use specialized hardware you won’t find at Amazon or other supercomputers below, say, the top 25 on earth. “The top 25 are rather specialized machines,” Dongarra says. “They’re designed in some sense for a subset of very specialized applications.”
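The size of that gap is easy to quantify from the article's own figures, since teraflops and petaflops differ only by a power of ten. A quick sketch of the arithmetic:

```python
# Figures quoted in the story: Amazon's EC2 cluster vs. Japan's K Computer.
amazon_flops = 240e12         # 240 teraflops = 240 trillion calcs/sec
k_computer_flops = 10.51e15   # 10.51 petaflops = 10.51 quadrillion calcs/sec

ratio = k_computer_flops / amazon_flops
print(f"The K Computer is ~{ratio:.0f}x faster than Amazon's entry")  # ~44x
```

So the top machine outruns Amazon's virtual cluster by roughly 44 to 1 — a wide gap, but one that matters mainly to the specialized workloads Dongarra describes.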
But according to Dongarra, you could still run these specialized applications atop Amazon. They just wouldn’t be quite as fast. And though some researchers and businesses are looking for petaflops, others will do just fine with teraflops.

Clouds Meet PODs
The irony is that Charlie Wuischpard and Penguin Computing actually offer their own online supercomputing service. They call it Penguin-On-Demand. But this is a little different from Amazon EC2. In essence, Penguin is offering remote access to a specific set of machines running in one of its data centers, whereas Amazon offers access to a virtual infrastructure that is shared among everyone using the service. “[POD] is not a virtualized resource,” Wuischpard tells us. “It’s especially built for high-performance computing workloads. Amazon is now trying to add this sort of thing to their toolkit, if you will, but I still think we have a leg up on them.”
The distinction between the two is rather difficult to get at. Ultimately, it comes down to two things: Penguin can tell you exactly where your application is running, and it has a long history with supercomputing. “There is a lot of difficulty in getting your application to run in the cloud,” Wuischpard says. “There’s network drivers and compilers and other stuff. You could figure out a lot of that on your own, but part of our aim with POD is to provide the expertise in building and running these machines to help our customers get on board and start using it.” According to Chuck Moore, a corporate fellow and technology group CTO at chip-designer Advanced Micro Devices, applications will require a significant rewrite if you’re moving them from an old school supercomputer to a service like Amazon.
Some operations do prefer Penguin’s service to Amazon. Earthtime — a company that offers 3-D maps of the world much like Google Street View offers 2-D images — uses POD to generate these 3-D models, and company founder and chief technology officer John Ristevski cites Penguin’s support as a reason his company doesn’t use Amazon. “You need a certain level of support, help with things like loading data off our disks and tweaking the performance of the cluster to suit our needs,” he tells Wired. “That’s not something we’ll ever get from Amazon. Amazon is never going to manage the distribution of the jobs or the processing itself, which is something that Penguin does.”
But with Amazon, a company like Cycle Computing can provide this sort of help, and even Penguin CEO Charlie Wuischpard acknowledges that the gap between Amazon and dedicated supercomputers is shrinking. Amazon built its virtual supercomputer for the Top 500 list as a way of announcing a new type of virtual server instance on EC2 that’s specifically designed for HPC applications. It’s unclear how Amazon ran its benchmark tests for the Top 500 List — the company did not respond to multiple requests for comment — but it looks like they ran the tests on a new cluster of physical machines before they were actually added to Amazon’s public service. Amazon previously offered instances for HPC applications, but these new CC2 instances are even beefier.
Spin Up, Spin Down
The point is that Amazon is an option. And it’s a rather convenient option. For Jason Stowe, the CEO of Cycle Computing, the idea of building a 30,000-core supercomputer with no hardware that costs just $1,279 an hour to run is something that can’t be ignored. “It’s just absurd,” he says. “If you created a 30,000-core cluster in a data center, that would cost you $5 million, $10 million, and you’d have to pick a vendor, buy all the hardware, wait for it to come, rack it, stack it, cable it, and actually get it working. You’d have to wait six months, 12 months before you got it running.”
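Stowe's comparison can be sanity-checked with simple arithmetic. This sketch uses only the figures quoted above, and deliberately ignores power, cooling, staff, and depreciation, all of which would tilt the comparison further toward renting:

```python
# Figures quoted in the story.
cloud_rate = 1279        # dollars per hour for the 30,000-core EC2 cluster
build_cost_low = 5e6     # Stowe's low-end estimate for building your own
build_cost_high = 10e6   # Stowe's high-end estimate

# How long could you run the rented cluster nonstop for the same money?
hours_low = build_cost_low / cloud_rate
hours_high = build_cost_high / cloud_rate

print(f"~{hours_low / 24:.0f} to {hours_high / 24:.0f} days "
      "of continuous 30,000-core use")
```

In other words, the purchase price alone buys months of round-the-clock rented capacity — and most workloads, unlike a data center, can be switched off when the question is answered.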
And by that time, he says, your application may have changed. “Your question may have evolved since you first provisioned your infrastructure,” Stowe says. “You may need more than 30,000 cores.” The added twist is that after you spin up 30,000 machines on Amazon, you can just as easily spin them down when you don’t need them.
Stowe agrees that Amazon isn’t for everyone. He acknowledges that Amazon’s virtualization layer may put a real drag on certain applications — a dedicated supercomputer runs without virtualization — but he says there are far more applications that will run just fine on a cloud service. And any drag will be much less than the six to 12 months it would take to build a supercomputer — not to mention the expense. “Your application may run 5 percent slower,” he says. “But you’re still getting access to world-class compute power.”
