BW Communities

Articles for Technology

New, Clear And Safe?

If you are an apologist of nuclear energy, pause for a moment to consider these statistics. The world has 442 nuclear power plants that together produce 374 gigawatts (GW). Another 65 are under construction, which will add 62 GW. A typical plant also produces about 20 tonnes of radioactive waste in a year. This needs to be stored safely for near-eternity, if we think in practical terms. It does not take too much ingenuity to realise we would run out of space to store the waste if the industry expands rapidly.

Look at India's nuclear ambitions. India gets 4 per cent of its electricity from nuclear energy. This is planned to rise to 10 per cent by 2022 and 25 per cent by 2050, a plan that needs consistent growth of 9.5 per cent a year for the next four decades. By then, India will be producing 470 GW of nuclear-based electricity, probably making it the largest user of nuclear power in the world. While many analysts consider this plan a pipe dream, even partial success will generate huge amounts of waste. Given India's high population density, where will it store the waste? Fortunately, next-generation nuclear technologies can reduce the waste to manageable levels.

For example, at the University of Texas at Austin, the Institute for Fusion Studies has been researching the problem. But instead of developing methods for safe burial of the waste, it looks at using the waste as fuel in another reactor. It has designed a system that can burn 90 per cent of this nuclear waste, while also reducing the time the waste remains radioactive from centuries to decades. This technology has attracted interest from all over the world, especially from India and China. In fact, India's Department of Atomic Energy is planning to send a team to work in this lab.

The institute has developed a hybrid nuclear reactor that combines nuclear fission and fusion to produce energy. Fusion is the way the Sun gets its energy; it happens when two atomic nuclei combine. In nuclear fission, an atomic nucleus splits into two or more. Both processes release enormous amounts of energy, but fusion is relatively safe as it produces far less radioactive waste than fission. It is, however, much more difficult to achieve controlled fusion.

At the Institute for Fusion Studies and the University of Texas department of physics, scientists use fusion not as an energy source, but as a method to produce neutrons. It is the neutrons that split the nuclei, and a lack of sufficient neutrons is the reason why we end up with the waste in the first place. As the byproducts of fission accumulate, they start absorbing neutrons without splitting. If we have a powerful external neutron source, the byproducts of fission — which are highly radioactive — can be burned further. "We will never reach a situation where we have no waste, but we can reduce it to manageable levels," says Swadesh Mahajan, senior research scientist at the institute.

The hybrid reactor, which combines fission and fusion in one device, has been a concept since the 1950s, but technology had not advanced enough then. Now hybrid reactor concepts have reached the design and engineering stage. Three major groups work on this: nuclear engineer Weston Stacey's group at Georgia Tech, the Institute for Fusion Studies in Texas and the Institute of Plasma Physics at Hefei in China.
All three have made major advances, but the Texas group has recently made a breakthrough that could lead to a real hybrid system soon. The group has designed the Super X Divertor, a fusion device that is small enough to be lifted by a crane and put inside a blanket of fissile material. The idea can be tested in two years, instead of the usual ten. There are, of course, several hurdles to cross before reaching a working hybrid, but experts do not consider any of them insurmountable. In fact, the hybrid may become a necessity if we accumulate unprocessed waste the way light-water reactors do at the moment. The beauty of the hybrid approach is that you need only one such reactor for every 15 conventional reactors to cut waste significantly.

Conventional reactor technology is itself going through a large shift. Almost all reactors are of the second generation, which use uranium-235 as fuel and produce krypton and barium as byproducts. They also produce elements such as plutonium that remain radioactive for hundreds of thousands of years. These are dangerous, and need enormous safety precautions. Most reactors have the spent but unprocessed fuel stored nearby. The Japanese Fukushima reactors would not have been so badly affected had they stored the waste as dry pellets elsewhere. Many such problems are taken care of in the fourth generation of reactors that will be tried over the next two decades.

Three basic kinds of fourth-generation reactors are being developed: gas-cooled, water-cooled and fast reactors, all notable for their simplicity and safety features. For example, 'pebble-bed' (gas-cooled) reactors have only two subsystems compared to 200 in light-water reactors. Fast reactors allow us to move away from using only uranium as the fuel source. "Physics does not tell us that only uranium can be used as a fissile material," says P.K. Iyengar, former chairman of India's Atomic Energy Commission. India is developing a fast reactor that will use thorium as fuel. Thorium is not fissile, but can be converted into fissile uranium-233 inside the reactor. Fast reactors can also handle an accident such as the loss of coolant much better. They generate negligible amounts of plutonium and very little long-lived waste.

Fourth-generation reactors could be the future of nuclear energy. Says Robert Grimes, professor of materials physics at Imperial College London: "We may need to enter a phase of reactor building in two decades. And for this to happen, we need to excite young people to take up nuclear engineering as a career now."
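The figures scattered through this article (442 plants, 20 tonnes of waste per plant per year, one hybrid per 15 conventional reactors, 90 per cent burn-up) reduce to simple fleet arithmetic. The sketch below is a minimal back-of-the-envelope check using only the article's numbers; it is bookkeeping, not a fuel-cycle model.

```python
import math

# Back-of-the-envelope fleet arithmetic using the article's figures.
# This is bookkeeping, not a fuel-cycle model.

PLANTS = 442               # operating reactors worldwide
WASTE_PER_PLANT_T = 20.0   # tonnes of radioactive waste per plant per year
REACTORS_PER_HYBRID = 15   # one hybrid can service 15 conventional reactors
BURN_FRACTION = 0.90       # share of the waste a hybrid can burn

annual_waste = PLANTS * WASTE_PER_PLANT_T
print(f"Annual waste, current fleet: {annual_waste:,.0f} t")       # ~8,840 t

# Four decades of operation without any waste burning:
print(f"Accumulated over 40 years:   {annual_waste * 40:,.0f} t")  # ~353,600 t

# Hybrids needed to service the whole fleet, and the residual waste:
hybrids = math.ceil(PLANTS / REACTORS_PER_HYBRID)
residual = annual_waste * (1 - BURN_FRACTION)
print(f"Hybrids needed: {hybrids}")                                # 30
print(f"Residual annual waste with hybrids: {residual:,.0f} t")    # ~884 t
```

On these numbers, about 30 hybrid reactors could cut the current fleet's annual waste from roughly 8,840 tonnes to under 900, which is what "manageable levels" means in practice.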

Inside A US Healthcare "Island Of Excellence"

Larry Brubaker suffered a massive stroke in March and was hospitalised for nearly a month before being moved to an acute rehabilitation facility, then to a nursing home and finally to his own home near Sunbury, Pennsylvania. A former warehouse worker, Brubaker, 64, cannot walk, has little movement in any of his limbs, and has lost some powers of speech and hearing. He is being cared for by his wife Kay, who worked as a maid before retiring. What's unusual is who is instructing Kay on how to administer medications and other routine care for her husband, as well as helping her navigate the maze of local agencies that provide social services like nursing and physical therapy -- her insurance company, Geisinger Health Plan, which is part of Geisinger Health System.

In his healthcare speech to Congress on Sept. 9, President Barack Obama cited Geisinger as a possible model for national reform. Based in central Pennsylvania, a rural region once dominated by coal mining, the system has recently earned a reputation for high-quality care at a lower-than-average cost. The White House refers to it as an "island of excellence" in the nation's murky healthcare waters. And unlike other highly touted healthcare providers, such as the Mayo Clinic, Geisinger isn't helped by well-heeled patients flying in from, say, Dubai. In fact, Geisinger serves 2.6 million people in 42 largely rural counties.

Yet for all its success, the Geisinger formula won't easily catch on nationally. One of the main obstacles, according to numerous experts, is doctors themselves. Most physicians prefer, or at least are accustomed to, the longstanding fee-for-service model -- and would likely blanch at Geisinger's salary-based compensation. Even so, Geisinger Health System shows just how much can be done at a local level to curb runaway US health costs.

House Calls

Kay Brubaker, 65, finds caring for her husband a daunting challenge. She has to watch for signs of the aspiration or pneumonia that are typical with bedridden patients. Kay also has to take his blood pressure and administer the right doses of medications -- 13 of them. But she has lots of help. Geisinger's Medical Home programme works to keep patients like her husband out of hospitals and nursing homes, and in their own homes, where they can be cared for by relatives or visiting nurses at far lower costs.

The programme's results are clear. Hospital admissions fell from 375 per 1,000 Medicare patients in 2006 to fewer than 350 the following year, saving 7 per cent in medical costs, while admissions for patients outside the programme rose to more than 400 per 1,000. To make the system work, Geisinger employs case managers like Jennifer Chikotas, a nurse who coordinates Brubaker's care. She is available by telephone 24/7 for advice or support -- and even arranges transportation to doctors' offices. "There's a lot of room for error or confusion," Chikotas said during a visit to the health plan's office in Selinsgrove, Pa. "Social Security, for example, wouldn't speak to Kay without the paperwork." Keeping a patient like Brubaker at home, most agree, does more than just lower costs. "The last place he needs to go back to is a hospital," Chikotas said. "It would open him up to more infection."

Best Practices

Established in 1915, Geisinger now has about 13,000 employees at 36 community practice sites and three hospitals.
A private nonprofit organization, it reported revenue of $2.1 billion -- and an excess of revenue over expenses of $34.6 million -- for the fiscal year ended June 30, 2009. Medical authorities inside and outside Geisinger credit the system's performance to three factors: its salary-based compensation for physicians; an electronic medical records system that reduces the likelihood of treatment duplication by integrating the services of doctors, nurses and administrators; and best-practice protocols that require doctors to follow accepted standards for certain kinds of treatment.

To cite one example: for a coronary artery bypass graft operation without a valve, Geisinger's average cost with a 5.3-day hospital stay was $88,055 in 2007. That compares with $370,502 over 8.1 days at Hahnemann University in Philadelphia, and $108,667 over 4.6 days at Lehigh Valley Health Network in Allentown, Pa., according to the Pennsylvania Health Care Cost Containment Council.

The ProvenCare best-practice protocol not only lowers costs but produces better health outcomes. For coronary artery bypass graft operations, for example, hospital readmissions fell 44 per cent in the first year of the ProvenCare operation. The complication rate dropped by 21 per cent and the average hospital stay shrank to 5.7 days from 6.2 days. In perinatal care, the proportion of births by caesarean section has decreased to 23.5 per cent from a baseline of 36.6 per cent since the protocol was introduced for that specialty in October 2008, reducing the likelihood of a mother having subsequent C-sections.

Role Models

Larry McNeely, a healthcare advocate for the US Public Interest Research Group, which campaigns for citizens' rights, said national health reform should and could follow the Geisinger example. "I think it's a model that makes a lot of sense all over the country," he said. Like others, McNeely argues that the key to reforming the US health system is to change the way doctors are paid. The point would be to reward the quality of care, as Geisinger does, rather than the quantity of procedures, drugs or consultations. "The real barrier is the current payment system," McNeely said. "If you do what Geisinger does -- provide better quality -- you are not rewarded, you are punished. The reward goes to the doctors who order the most care."

Geisinger's approximately 680 physicians are compensated with salaries and performance-related bonuses, rather than through the traditional fee-for-service model. And they are rewarded for meeting ProvenCare standards. Winging it is not an option. In treating diabetes, for example, physicians are required to follow a nine-step checklist that is designed to ensure the best outcome for the patient -- and reduce future treatment costs. "We tell our doctors, 'If you do these nine things well, your patients' complications are diminished,'" said Dr. Howard Grant, Geisinger's chief medical officer.

The protocols have allowed Geisinger to meet best-practice standards more often than most other systems, according to a study by the Pennsylvania Health Care Quality Alliance, a group of healthcare providers and insurers that seeks to establish uniform standards. For heart-attack care, Geisinger performs recommended treatments 98 per cent of the time, compared with 81 per cent in Pennsylvania hospitals overall, and 79 per cent nationwide, the organization found.
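The bypass-cost comparison above is easier to read as ratios. The sketch below simply restates the article's 2007 figures; no numbers beyond those quoted are introduced.

```python
# Cost-per-case comparison for a coronary artery bypass graft (no valve),
# 2007 figures as quoted in the article.

cases = {
    "Geisinger":            (88_055, 5.3),   # (total cost in $, hospital days)
    "Hahnemann University": (370_502, 8.1),
    "Lehigh Valley Health": (108_667, 4.6),
}

base_cost, _ = cases["Geisinger"]
for name, (cost, days) in cases.items():
    print(f"{name:22s} ${cost:>9,.0f}  {days:.1f} days  "
          f"{cost / base_cost:.2f}x Geisinger, ${cost / days:,.0f}/day")
```

On these figures, Hahnemann's cost per case is about 4.2 times Geisinger's, and even Lehigh Valley, cheaper per case than Hahnemann, runs roughly 40 per cent higher than Geisinger per hospital day.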
Geisinger is not without its critics, some of whom lambast the ProvenCare system as "cookbook medicine" that marginalizes a doctor's professional judgment -- a claim Geisinger dismisses. "It's not set in stone," Grant said. "But the presumption is that you will follow the pathway unless there is a compelling reason not to."

Another potential barrier to widespread adoption of the Geisinger model is the expense of setting up an electronic medical records system. At Geisinger, all parties -- patients, physicians, nurses, administrators, and the internal insurance plan -- have timely access to each patient's medical history. The system, which has cost about $100 million since it was installed in the mid-1990s, is designed to prevent duplication of procedures and improve the coordination of care. For example, the electronic system allows emergency-room doctors to peruse a patient's history, allowing them to make a better-educated judgment about whether to admit that person. By contrast, paper records are typically not available to ER staff, so they are more likely to err on the side of caution and admit a patient, adding unnecessary costs. At Geisinger, inappropriate hospital admissions have fallen by 40 per cent since the system was introduced, Grant said.

The electronic system also promises better care and lower costs in future by allowing doctors to be more proactive, Grant said. A rheumatologist, for example, can use the system to identify women who are at risk of osteoporosis, and then initiate preventive treatment. "We think that over time, we will see a significant reduction in the number of people who have hip fractures," Grant said.

Part of the system is a facility called My Geisinger, which allows patients to email doctors, access their own medical records and make appointments. It also allows nurses like Erin Hubbert to deal with minor complaints that probably don't need the attention of a doctor or an office visit. At a Geisinger clinic, Hubbert was working at a computer terminal, instructing a patient who had used the system to notify the clinic of a case of diarrhea to drink clear fluids for 24 to 48 hours.

Chipping 3D

When the gadget world moves towards 3D, why should the transistors on which it is built be left behind? One good reason is that it is difficult to build 3D transistors. Also, it is risky. But Intel has chosen to ignore these arguments and build the world's first 3D transistors. They will appear in Intel chips for the first time towards the end of this year.

Intel is a loner in this move towards 3D chips. The company wants to keep Moore's Law going, which predicts that the number of transistors on an integrated circuit, and with it the performance, will double roughly every two years. Intel cannot keep increasing the clock speed of its processors because the energy consumption becomes too high. It is rapidly reaching the limits of its current technology, and so, among other things, it decided to expand the transistor into another dimension.

There is one major reason for this decision: it is getting increasingly difficult to shrink things any further without paying a heavy price. To get more space, Intel engineers built what is generally known as a fin field-effect transistor, or FinFET. It got that name, coined by some University of California professors, because the conducting channel for electrons is a thin vertical fin, with the gate wrapped around it. Intel does not call it a FinFET, though; it calls it a tri-gate transistor, because the gate has three faces: one on each side of the fin and one on top.

Intel says the thin three-dimensional fin with three gates lets it control the current far better than a flat channel does. In practice, this means more current can flow in the 'on' state and close to zero current in the 'off' state. The extra on-current translates to higher performance, and the near-absence of off-current translates to lower power requirements. The transistor can also switch between the two states quickly, which further increases performance. According to Intel, performance can increase by 37 per cent at low voltage and energy consumption can fall by 50 per cent.

Intel likens the move towards 3D transistors to building skyscrapers when we run out of space on the ground. The fins are very thin and go upward, so they can be packed close together. However, in spite of Intel's confidence in its research and development, other semiconductor firms think 3D transistor technology still has some way to go. To improve performance, they are instead pursuing a method first implemented by IBM in 1998, called silicon-on-insulator (SOI) technology. In essence, it involves the use of an insulator below the silicon junction. This method is compatible with current manufacturing techniques and is thus less risky, but Intel has chosen the riskier and more powerful option.

Along with 3D, Intel has also announced a shift to 22-nanometre technology. The two combined should give a considerable boost to the performance of its chips.
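The 22-nanometre shift can be tied back to Moore's Law with a little geometry. The sketch below is a rough illustration; the 32 nm figure for Intel's previous node is background knowledge rather than something stated in the article, and the ideal-scaling assumption ignores real layout rules.

```python
# Idealised density scaling between process nodes. Assumes transistor
# area shrinks with the square of the feature size, which real layout
# rules only approximate.

OLD_NODE_NM = 32.0   # Intel's previous node (assumed; not in the article)
NEW_NODE_NM = 22.0   # the node announced alongside tri-gate transistors

area_ratio = (NEW_NODE_NM / OLD_NODE_NM) ** 2
density_gain = 1 / area_ratio

print(f"Relative transistor area: {area_ratio:.2f}")  # ~0.47
print(f"Density gain: {density_gain:.2f}x")           # ~2.1x
```

The roughly 2x density gain is exactly one Moore's Law doubling, which is why each node transition matters so much to Intel.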
It remains to be seen how the chips will perform in practice. Intel has had a few hiccups of late, the latest being a flaw in the Sandy Bridge processor design discovered early this year. However, the tri-gate transistor gives Intel an option in the mobile world, where it can provide high-performance chips without too much increase in energy consumption. It can also increase its dominance in the PC world with a substantial boost in chip power. And it can do this while continuing the parallel development of multicore chips. Intel is yet to explain plenty of things; how the chips stack up against ARM designs in mobile phones, and what consumers will buy, remain open questions. (This story was published in Businessworld Issue Dated 16-05-2011)

New Rays

Starting sometime in the past century, when human beings became aware that solar energy is non-polluting and abundant, companies and research and development institutions have been spending considerable money and energy to make it cheap as well. Engineers have tried a slew of approaches to harvest solar energy, but with only incremental improvements. Now there is a solar energy technology, the nantenna, that promises an order-of-magnitude increase in performance — if it goes from a prototype to a commercial version.

Developed by a team of scientists led by Patrick Pinhero of the University of Missouri, it takes a new approach to using solar energy. Solar cells, no matter what they are made of, use the property of some materials to produce electricity — called the photoelectric effect — when exposed to sunlight. Another kind of solar energy technology, solar thermal, uses the Sun's energy to heat water and then produce electricity through a turbine. The Missouri invention uses a series of tiny antennas that can absorb the Sun's radiation and generate electricity. There is, however, a crucial difference compared to photovoltaic cells: the antennas can use 90 per cent of the radiation, while photovoltaic cells can use only around 20 per cent.

It is a revolutionary technology, even though it is early days for it, and the scientists think it will take five years before it can become a commercial device. Yet there are many uses for it in the meantime. It could produce electricity from waste heat in factories. It could detect contraband materials in airports. It could be built into the skin of electronic devices for continual charging. The nantenna has its uses in security applications as well, and some of these could make their way to the world before a large-scale energy-harvesting device does. But it can become a game-changer if it ever makes it to the power-generation market. And no serious theoretical bottlenecks are obvious now.

The idea of a nantenna — a short form for nano-antenna — originated in the 1970s. Tiny antennas of the right material can absorb radiation at specific wavelengths proportional to their size. By designing antennas of different sizes, one can build systems that absorb radiation of different frequencies. The absorbed radiation produces an alternating current that is converted to direct current using a rectifier. However, there are enormous technological challenges in designing tiny antennas that can absorb visible and infrared radiation from the Sun. Pinhero and his team are said to have surmounted several such challenges and developed a prototype. They were assisted by engineers from the University of Colorado, who developed a diode to act as a rectifier, and by the private firm MicroContinuum in Massachusetts, which is developing a high-volume manufacturing process.

The biggest advantage of this technology is that it can use radiation that lies outside the scope of photovoltaics. A substantial amount of radiation reaches the Earth's surface between wavelengths of 800 and 900 nanometres, even when it is cloudy, but it is very difficult to design solar cells that can use this radiation. One could design panels with antennas that absorb the Sun's radiation on one side and the Earth's radiated heat on the other. As the Earth radiates heat at night, such a device could produce some electricity even at night. Pinhero's team has fabricated the device on a flexible substrate, making it ideal for installation on rooftops and other surfaces.
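The size-to-wavelength relationship described above can be made concrete with the standard half-wave dipole rule of thumb. The sketch below is only a scale estimate: the half-wave formula is textbook antenna theory, not the Missouri team's actual design, and real nantennas must also account for how light propagates in the metal.

```python
# Rough antenna sizing for the 800-900 nm band the article highlights,
# using the textbook half-wave dipole rule (element length ~ wavelength / 2).
# Real nantenna designs are more involved; this is only a scale estimate.

C = 3.0e8  # speed of light, m/s

for wavelength_nm in (800, 900):
    wavelength_m = wavelength_nm * 1e-9
    freq_thz = C / wavelength_m / 1e12   # resonant frequency in terahertz
    length_nm = wavelength_nm / 2        # half-wave element length
    print(f"{wavelength_nm} nm -> {freq_thz:.0f} THz, "
          f"element length ~{length_nm:.0f} nm")
```

Elements a few hundred nanometres long resonating at hundreds of terahertz is what makes fabrication, and especially the rectifying diode that must switch at those frequencies, the hard part.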
So it pays to keep a watch on the development of nantennas. They could power the world one day. (This story was published in Businessworld Issue Dated 30-05-2011)

Robotics: A Step Up

Amanda Boxtel, a wheelchair user for 19 years, is about to stand up. A skiing accident in Aspen, Colorado, left her paralysed from the waist down: four vertebrae shattered and a bruised spinal cord. She slowly pushes herself out of the chair with crutches, teeters backward for a second, then leans forward again and takes a step. "Throughout my paralysed life, I have figured out how to adapt to a world of sitting, and to play hard from a recumbent position. Now, I have the power to enable me to stand, step out and walk," says Boxtel, who has been through years of physical therapy and experimental treatments in the hope of one day being able to stand up again.

The robotic prosthetic legs, called eLEGS, are worn over ordinary clothing; simple velcro straps (made of nylon fabric), backpack-style clips and shoulder straps secure you in. The legs are driven by four motors, one for each hip and knee. The ankle joint is controlled with passive springs that keep the foot angled so that it can be placed on the ground, heel to toe, as the leg steps. Sensors in the legs relay position information to the control unit, which determines how to bend the joints and, in turn, walk. While the device can support a wearer's weight, balance is left to the person, via crutches, which also serve to control the system. To take a step, the wearer pushes down with the crutch opposite to the intended stepping leg. Similar gestures, such as pushing down on both crutches simultaneously, allow the wearer to transition from sitting to standing, or to make turns. It is the first such device in the world to do so without a tether.

For now, the device is only available at a few rehabilitation centres in the United States, where it is being tested under the supervision of trained physical therapists as part of a clinical trial. But if things go well, eLEGS could be available commercially by 2013. "We are anxious to field the technology starting with premiere rehab facilities in the coming months. Ultimately, we foresee the day when people will begin using eLEGS beyond the in-patient setting and start using it day-to-day in the real world," says Fogelin.
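The crutch-gesture control described above maps naturally onto a small state machine. The sketch below is purely illustrative: the gesture vocabulary (opposite-crutch press to step, both crutches to toggle sitting and standing) is inferred from the article, and the logic is hypothetical, not the device's actual control software.

```python
# Illustrative state machine for crutch-gesture gait control, based on
# the gestures described in the article. Hypothetical logic, not the
# device's real controller.

def next_state(state: str, gesture: str) -> str:
    """Map a crutch gesture to the next gait state.

    States: SITTING, STANDING, STEPPING_LEFT, STEPPING_RIGHT.
    Gestures: press_left / press_right (one crutch), press_both.
    """
    if gesture == "press_both":
        # Pressing both crutches toggles between sitting and standing.
        return "STANDING" if state == "SITTING" else "SITTING"
    if state == "SITTING":
        return state  # the wearer must stand before stepping
    if gesture == "press_left":
        return "STEPPING_RIGHT"  # the opposite crutch drives each step
    if gesture == "press_right":
        return "STEPPING_LEFT"
    return state

# A short walking sequence: stand up, then alternate steps.
state = "SITTING"
for gesture in ["press_both", "press_left", "press_right", "press_left"]:
    state = next_state(state, gesture)
    print(f"{gesture:12s} -> {state}")
```

The real system would, of course, also fuse the leg-position sensor data mentioned above before committing a motor command; this sketch only captures the gesture grammar.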

Cloud Clout

When you start talking about Microsoft, it is often tempting to compare the IT giant with its closest rivals. In the 1990s, most comparisons were with its old foe Apple. There were countless arguments then about the benefits and drawbacks of Windows versus the Mac OS. Then, for a brief while, comparisons were between Linux and Windows, a line of argument that has not died down completely and probably never will. Now, in the cloud age, Microsoft products are compared with those of Google. Specifically, its Office products are weighed against Google Apps for Business, although the two sets of productivity software are very different in some ways. Now there is a reason to continue these comparisons.

A few days ago, Microsoft released the beta version of Office 365, its cloud-based offering for the Office suite. Before we look at it more closely, it is worth noting what it is not. It is quite different from Google Apps for Business. It is not a pure web version of Office 2010 either. Yet it is a response to Google Apps for Business and to the increasing tendency to use cloud-based versions of software. Microsoft offers these services in its own unique way, which could be exhilarating or frustrating, depending on your techno-political affiliation. If you leave those affiliations behind, Office 365 is like any other cloud-inspired software product: some good features, some not-so-good ones, and many others you are not sure about.

Office 365 is aimed at small businesses and is the next version of Microsoft's Business Productivity Online Services (BPOS). It consists of Office Web Apps and online versions of SharePoint, Exchange and Lync. As with Google Apps for Business, Office 365 also has a marketplace that helps users find partners and other applications. Office Web Apps is the online version of Microsoft Word, PowerPoint, Excel and OneNote. SharePoint Online is a set of collaboration tools. Exchange Online is a hosted service for email, calendar and contacts. Lync Online is an instant messaging and communications service. So Office 365 has an extremely broad set of features: unified communications, business intelligence, content management, collaboration tools, enterprise search and more. It also offers the option of purchasing Office as a subscription.

Microsoft thus has three sets of offerings related to the Office suite. One is the outright purchase of a version of Office in the conventional way. The second option, particularly for a consumer, is to use Windows Live. It has a large set of products, including limited word processing, mail and Outlook Connector, for free, but with some ads. The third is Office 365, where you get the full Office suite and other products for a subscription. It is here that Office 365 differs from Google Apps for Business, which has no downloadable productivity suite. If you are on Google Apps but offline, there isn't anything you can do. But with Office 365, you can continue to use the Office products as long as you have paid the subscription fee. Office 365 makes commercial sense, though, only for a company with at least 25 employees.

Office 365 integrates your PC with mobile platforms such as a Windows Phone 7 device or an iPhone. It works with Mac as well. For $6 per user, you also get 25 GB of storage free. A subscription also obviates the need for frequent upgrades. The launch of Office 365 is thus a significant move for Microsoft. We have to wait and see how Google and others react.
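The $6-per-user figure invites simple subscription arithmetic. A minimal sketch follows; the one-off boxed-licence price used for comparison is a hypothetical round number for illustration, not a figure from the article.

```python
# Subscription arithmetic for Office 365's small-business plan,
# using the article's $6/user/month figure. The boxed-copy price
# below is an illustrative assumption, not from the article.

MONTHLY_PER_USER = 6.00       # Office 365 small-business price (article)
ASSUMED_BOXED_PRICE = 280.00  # hypothetical one-off licence cost per user

annual_per_user = MONTHLY_PER_USER * 12   # $72/user/year
for users in (5, 25, 100):
    print(f"{users:3d} users: ${users * annual_per_user:>8,.0f}/year")

# Years until subscription outlays match a one-off boxed licence:
breakeven = ASSUMED_BOXED_PRICE / annual_per_user
print(f"Subscription matches a boxed licence after ~{breakeven:.1f} years")
```

At $72 per user per year, the subscription only overtakes a one-off licence after several years on these assumptions, but it bundles hosted Exchange, SharePoint and the 25 GB of storage, which is where the comparison with Google Apps really plays out.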

Apple, Google Tap Phone Location Data: WSJ

Apple's iPhones and Google's Android phones regularly transmit location data to Apple and Google as the two build databases that could help them tap the market for location-based services, the Wall Street Journal reported, citing data and documents it had analyzed. The paper cited research by security analyst Samy Kamkar showing that an HTC Android phone collected location data every few seconds and transmitted it several times every hour. The phone also sent information about wireless Internet networks in the area, the paper said. The iPhone transmits data about the user's location and Wi-Fi networks to Apple every 12 hours, the paper said. Google and Apple were not immediately available for comment to Reuters late on Thursday. (Reuters)
