
Category Archives: Technology

What’s actually wrong with email?


A lot has been written about the problems with our current email platforms. A quick search for “problems with email” on any of the prominent tech blogs will turn up plenty of articles arguing that email no longer works. Have a quick look at the reference link given below and you will get a sense of what we are talking about.

Articles like these left me pondering: is email actually bad? And if so, why hasn’t anyone been able to fix it?

Read on for answers and solutions from a startup working to make handling email easier and more manageable:

http://www.blog.metisme.com/7/problem-with-email


Posted on October 3, 2012 in Technology, Web Development

 


CHI ’11: Enhancing the Human Condition


May 9, 2011, 8:45 AM PT (courtesy: Microsoft Research)

The Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI 2011), being held May 7-12 in Vancouver, British Columbia, provides a showcase of the latest advances in human-computer interaction (HCI).

“The ongoing challenge,” says Desney S. Tan, CHI 2011 general conference chair and senior researcher at Microsoft Research Redmond, “is to make computing more accessible by integrating technology seamlessly into our everyday tasks, to understand and enhance the human condition like never before.”

Microsoft Research has a consistent record of support for CHI through sponsorships and research contributions. This year, Microsoft researchers authored or co-authored 40 conference papers and notes, approximately 10 percent of the total accepted.

This comes as no surprise to Tan.

“Microsoft Research’s goal,” he says, “is to further the state of the art in computer science and technology. As the realms of human and technology become more and more intertwined, Microsoft Research has focused more and more of our effort at the intersection of human and computer, and this is evident from our researchers’ level of participation.”

One unusual contribution comes from Bill Buxton, Microsoft Research principal researcher. Items from Buxton’s impressive accumulation of interactive devices are on display in an exhibit titled “The Future Revealed in the Past: Selections from Bill Buxton’s Collection of Interactive Devices.”

Effects of Community Size and Contact Rate in Synchronous Social Q&A, by Ryen White and Matthew Richardson of Microsoft Research Redmond and Yandong Liu of Carnegie Mellon University, received one of 13 best-paper awards during the conference, as did Your Noise is My Command: Sensing Gestures Using the Body as an Antenna by former Microsoft Research intern Gabe Cohn and visiting faculty member Shwetak Patel, both from the University of Washington, along with Dan Morris and Tan of Microsoft Research Redmond. One of two best-notes awards went to Interactive Generator: A Self-Powered Haptic Feedback Device, co-authored by Akash Badshah, of the Phillips Exeter Academy, a residential high school in Exeter, N.H.; Sidhant Gupta, Cohn, and Patel of the University of Washington; and Nicolas Villar and Steve Hodges of Microsoft Research Cambridge.

The Touch-Sensitive Home

Imagine being freed of physical attachments to input devices because your body is the input device. One approach is to put sensors on the body. The challenge then is to separate actual “signal” from “noise,” such as ambient electromagnetic interference, which overwhelms sensors and makes signal processing difficult. In Your Noise is My Command: Sensing Gestures Using the Body as an Antenna, the researchers turned the problem on its head.

“Can we use that electrical noise as a source of information about where a user is and what that user is doing?” Morris recalls asking. “These are the first experiments to assess whether this is feasible.”

Figure: The human body behaves as an antenna in the presence of noise radiated by power lines and appliances. By analyzing this noise, the entire home becomes an interaction surface.

The human body is literally an antenna, picking up signals while moving through the noisy electrical environment of a typical home. The researchers tested whether it is possible to identify signals with enough precision to tell what the user is touching and from where. To measure those signals, the researchers placed a simple sensor on each study participant and recorded the electrical signals collected by those sensors. Laptop computers carried in each person’s backpack collected data as the participants performed a series of “gestures,” such as touching spots on walls and appliances or moving through different rooms.

Next came determining whether analysis of this data provided the ability to distinguish between gestures and locations. It was possible in many cases to recognize participants’ actions based solely on the ambient noise picked up by their bodies. For example, once a participant “taught” the algorithms about the noise environment around a particular light switch by demonstrating gestures around the switch, it was possible to determine which of five spots near that switch the user was touching, with an accuracy of better than 90 percent. Similarly, researchers could identify in which room a participant was present at any given time with an accuracy exceeding 99 percent, because the electrical noise environment of each room is distinct.
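The paper reports accuracies rather than code, but the train-then-classify loop it describes is easy to picture. Here is a minimal sketch of the general idea (the spectral features, the nearest-centroid classifier, and all names are illustrative assumptions, not the authors’ actual pipeline): summarize each noise recording by its magnitude spectrum, average a few demonstrations per gesture into a centroid, and label new recordings by the nearest centroid.

```python
# Minimal sketch, NOT the authors' pipeline: classifying touch
# gestures from the ambient electrical noise a body-worn sensor
# picks up. Assumes fixed-length recordings of voltage samples;
# all names and the nearest-centroid classifier are illustrative.
import numpy as np

def noise_features(samples, n_bins=64):
    """Summarize a recording by its coarse magnitude spectrum.

    Power-line noise is dominated by the 50/60 Hz fundamental and
    its harmonics, so the spectral profile varies with location and
    with what the hand is touching.
    """
    spectrum = np.abs(np.fft.rfft(samples))[:n_bins]
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)

def train_centroids(demos):
    """demos: {gesture_label: [recording, ...]} -> mean feature per label."""
    return {label: np.mean([noise_features(r) for r in recs], axis=0)
            for label, recs in demos.items()}

def classify(recording, centroids):
    """Label a new recording by its nearest gesture centroid."""
    f = noise_features(recording)
    return min(centroids, key=lambda lbl: np.linalg.norm(f - centroids[lbl]))
```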

“It was quite a gratifying series of results,” Morris says. “Now, we are considering how we can package this up into a real-time, interactive system and what innovative scenarios we can enable when we turn your entire home into a touch-sensitive surface.”

The Patient as Medical Display Surface

Reports from the World Health Organization and the American Medical Association confirm that patient noncompliance is a major obstacle to successful medical outcomes in treatment of chronic conditions. Doctor-patient communication has been identified as one of the most important factors for improving compliance. The paper AnatOnMe: Facilitating Doctor-Patient Communication Using a Projection-Based Handheld Device focuses on understanding how lightweight, handheld projection technologies can be used to enhance doctor-patient communication during face-to-face exchanges in clinical settings.

Figure: Three presentation surfaces: a) body, b) model, and c) wall.

Focusing on physical therapy, co-authors Tao Ni of Virginia Tech—a former Microsoft Research Redmond intern—Amy K. Karlson of Microsoft Research Redmond, and Daniel Wigdor, formerly of Microsoft Research Redmond and now at the University of Toronto, spoke with doctors to understand general communication challenges and design requirements, then built and studied a handheld projection system that flexibly supports the key aspects of information exchange. Doctors can direct handheld projectors at walls or curtains to create an “anywhere” display, or at a patient to overlay useful medical information directly atop the appropriate portion of the anatomy for an augmented-reality view, or “virtual X-ray.”

Reviews and formal lab studies with physical therapists and patients established that handheld projections delivered high value and a more engaging, informative experience than what is traditionally available.

“This is an interesting new space,” Karlson says, “because, despite the prevalence of technology in many medical settings, technology has been relatively absent from face-to-face communication and education opportunities between doctors and patients.

“The coolest part was hearing the positive reactions from study participants when we projected medical imagery directly onto their arms and legs. We got, ‘Wow!’ ‘Cool!’ and ‘I feel like I am looking directly through my skin!’ There seems to be something quite compelling and unique about viewing medical imagery on one’s own body.”

Touch-Free Interactions in the Operating Room

The growth of image-guided procedures in surgical settings has led to an increased need to interact with digital images. In a collaboration with Lancaster University funded by Microsoft Research Connections, Rose Johnson of the Open University in Milton Keynes, U.K.; Kenton O’Hara, Abigail Sellen, and Antonio Criminisi of Microsoft Research Cambridge; and Claire Cousins of Addenbrooke’s Hospital in Cambridge, U.K., address the problem of enabling rich, flexible, but touch-free interaction with patient data in surgical settings. The resulting paper, Exploring the Potential for Touchless Interaction in Image-Guided Interventional Radiology, received a CHI 2011 Honorable Mention paper award.

During treatments such as interventional radiology, images are critical in guiding surgeons’ work; yet because of sterility issues, surgeons must avoid touching input devices such as mice or keyboards. They must navigate digital images “by proxy,” using other members of the surgical team to find the right image, pan, or zoom. This can be onerous and time-consuming.

Figure: This view toward an X-ray table from a computer area shows a surgical team and the complex collaborative environment that touch-free interactions must address.

The research team began fieldwork with the goal of understanding surgeons’ working practices. The researchers are collaborating with surgical teams to develop and evaluate a system. Touchless-interaction solutions such as Kinect for Xbox 360 offer opportunities for surgeons to regain control of navigating through data. There are many challenges, though, in terms of enabling collaborative control of the interface, as well as achieving fluid engagement and disengagement with the system, because the system needs to know which gestures are “for the system” and which are not.
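The paper frames engagement and disengagement as an open challenge rather than prescribing a mechanism, but a simple “clutch” illustrates the shape of a solution: gestures reach the image viewer only while the surgeon holds an explicit engagement pose. The sketch below is hypothetical; the pose, thresholds, and tracker interface are assumptions for illustration, not the study’s design.

```python
# Hypothetical "clutch" for touch-free control in a sterile setting:
# gestures are forwarded to the image viewer only while the surgeon
# holds an explicit engagement pose. A dwell time keeps incidental
# reaches from toggling the system mid-procedure.
from dataclasses import dataclass

@dataclass
class EngagementClutch:
    dwell_frames: int = 30   # hold the pose ~1 s at 30 fps to engage
    engaged: bool = False
    _held: int = 0

    def update(self, hand_y: float, head_y: float) -> bool:
        """Feed one tracked frame; return True while engaged.

        Coordinates assume y grows upward. Raising a hand above head
        height and holding it there engages the system; dropping the
        hand disengages, so ordinary pointing and talking around the
        table are ignored.
        """
        if hand_y > head_y:
            self._held += 1
            if self._held >= self.dwell_frames:
                self.engaged = True
        else:
            self._held = 0
            self.engaged = False
        return self.engaged
```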

“The most intriguing aspect of this project,” Sellen says, “is the potential to make a real impact on patient care and clinical outcome by reducing the time it takes to do complicated procedures and giving surgeons more control of the data they depend on. From a technical side, it is exciting to see where technologies like Kinect can realize their value outside of the gaming domain.”

Use Both Hands

Touch interfaces are great for impromptu casual interactions, but it is not easy to select a point precisely with your finger or to move an image without rotating it unless there are on-screen menus or handles. In the world of touch, though, such options are not desirable, because they introduce clutter. Rock & Rails: Extending Multi-touch Interactions with Shape Gestures to Enable Precise Spatial Manipulations, by Wigdor, Hrvoje Benko of Microsoft Research Redmond, and John Pella, Jarrod Lombardo, and Sarah Williams of Microsoft, proposes a solution by using recognized hand poses on the surface in combination with touch.

“Rock and Rails” is an extension of the touch-interaction vocabulary. It maintains the direct-touch input paradigm but enables users to make fluid, high degree-of-freedom manipulations while simultaneously providing easy mechanisms to increase precision, specify manipulation constraints, and avoid occlusions. The tool set provides mechanisms for positioning, isolating orientation, and scaling operations using system-recognized hand postures, while enabling traditional, simple, direct-touch manipulations.

Figure: The Rock & Rails paper augments a) traditional direct-manipulation gestures with independently recognized hand postures used to restrict manipulations conducted with the other hand: b) rotate, c) resize, and d) 1-D scale. This enables fluid selection of degrees of freedom and, thus, rapid, high-precision manipulation of on-screen content.
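The paper’s contribution is the interaction vocabulary, not an implementation, but the constraint idea is simple to sketch. In the hypothetical Python below (the object model and posture names are illustrative, not the paper’s code), a posture recognized on the non-dominant hand filters what a drag by the other hand is allowed to change:

```python
# Hypothetical sketch of the Rock & Rails constraint idea: a posture
# on the non-dominant hand restricts the effect of the other hand's
# drag. Not the paper's implementation.

def apply_drag(obj, dx, dy, posture=None):
    """Apply a drag (dx, dy) to obj, restricted by the active posture.

    obj: dict with 'x', 'y', 'angle', 'width'
    posture: None       -> free translation
             'rail_x'   -> translation constrained to the x axis
             'rock'     -> position pinned; drag becomes rotation
             'edge'     -> 1-D scale: drag changes width only
    """
    if posture is None:
        obj['x'] += dx
        obj['y'] += dy
    elif posture == 'rail_x':
        obj['x'] += dx
    elif posture == 'rock':
        obj['angle'] += 0.5 * dx              # degrees per pixel, arbitrary
    elif posture == 'edge':
        obj['width'] = max(1, obj['width'] + dx)
    return obj
```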

The project was a collaborative effort between Microsoft Research and the Microsoft Surface team, so the researchers were able to test their work on real-world designers—the intended audience.

“One of the best moments of the project,” Benko recalls, “was when we realized our gestures could be made ‘persistent’ on the screen. We had transitioned from the model where you had to keep the pose of the hand in order to signal a particular option, to a more relaxed mode where the user could ‘create’ or ‘pin’ a proxy representation of a gesture. This allows users to perform all sorts of wacky combinations of operations without needing to hold the gesture for a long period of time.”

These are just a few of Microsoft Research’s current investigations into how to enhance the ways people can interact with computing devices.

“HCI is all about discovering and inventing technologies that deeply transform people’s lives,” Tan concludes. “Microsoft Research is committed to advancing the state of the art in human-computer interaction.”

Thanks to http://research.microsoft.com/en-us/news/features/chi2011-050911.aspx for this content.

 

Posted on May 9, 2011 in Technology

 


Web Development Tutorials


Hello friends…

Recently I came across a really interesting website that provides free tutorials for all the major web development technologies. The site offers simple, easy-to-follow tutorials that will be helpful even for those who are new to web development or do not know the basics. Here is a brief list of the free tutorials available:

W3Schools Tutorials:

- HTML & CSS (tutorials and references)
- XML Languages (tutorials and references)
- Web Services (tutorials and references)
- Browser Scripting (tutorials and references)
- Server Scripting (tutorials and references)
- Multimedia (tutorials)

One more advantage is that you can experiment with your code right there on the site, so the learning is practical too.

I’ve heard that many great developers started out here, so let’s give it a try. It’s free, and you don’t even have to step out of your home.

Link: www.w3schools.com

 

Posted on December 21, 2010 in Technology, Tutorials, Web Development

 


How Motion Detection Works in Xbox Kinect



The prototype for Microsoft’s Kinect camera and microphone famously cost $30,000. At midnight Thursday morning, you’ll be able to buy it for $150 as an Xbox 360 peripheral.

Microsoft is projecting that it will sell 5 million units between now and Christmas. We’ll have more details and a review of the system soon, but for now it’s worth taking some time to think about how it all works.

Camera

Kinect’s camera is powered by both hardware and software. And it does two things: generate a three-dimensional (moving) image of the objects in its field of view, and recognize (moving) human beings among those objects.

Older software programs used differences in color and texture to distinguish objects from their backgrounds. PrimeSense, the company whose tech powers Kinect, and recent Microsoft acquisition Canesta use a different model. The camera transmits invisible near-infrared light and measures its “time of flight” after it reflects off the objects.

Time-of-flight works like sonar: If you know how long the light takes to return, you know how far away an object is. Cast a big field, with lots of pings going back and forth at the speed of light, and you can know how far away a lot of objects are.
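The round-trip arithmetic is one line: the light covers twice the distance to the object, so distance equals speed of light times round-trip time, divided by two. A quick check in Python:

```python
# The round-trip arithmetic behind time-of-flight depth sensing:
# light travels to the object and back, so distance = c * t / 2.
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    return C * round_trip_s / 2.0

# An object 2 m away returns light in roughly 13.3 nanoseconds:
print(tof_distance_m(13.34e-9))  # ~2.0 m
```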

Using an infrared generator also partially solves the problem of ambient light. Since the sensor isn’t designed to register visible light, it doesn’t get quite as many false positives.

PrimeSense and Kinect go one step further and encode information in the near-IR light. As that information is returned, some of it is deformed — which in turn can help generate a finer image of those objects’ 3-D texture, not just their depth.

With this tech, Kinect can distinguish objects’ depth within 1 centimeter and their height and width within 3 mm.


Figure: Diagram from PrimeSense explaining the PrimeSensor reference design.

 

Middleware

At this point, both the Kinect’s hardware — its camera and IR-light projector — and its firmware (sometimes called “middleware”) are operating. The Kinect has an on-board processor that runs algorithms on the incoming sensor data to render the three-dimensional image.

The middleware also can recognize people: distinguishing human body parts, joints and movements, as well as distinguishing individual human faces from one another. When you step in front of it, the camera “knows” who you are.

Does it “know” you in the sense of embodied neurons firing, or the way your mother knows your personality or your confessor knows your soul? Of course not. It’s a videogame.

But it’s a pretty remarkable videogame. You can’t quite get the fine detail of a table tennis slice, but the first iteration of the WiiMote couldn’t get that either. And all the jury-rigged foot pads and nunchuks strapped to thighs can’t capture whole-body running or dancing like Kinect can.

That’s where the Xbox’s processor comes in: translating the movements captured by the Kinect camera into meaningful on-screen events. These are context-specific. If a river-rafting game requires jumping and leaning, it’s going to look for jumping and leaning. If navigating a Netflix “Watch Instantly” menu requires horizontal and vertical hand-waving, that’s what will register on the screen.
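In other words, the recognizer’s output is routed through a per-context lookup. A toy sketch of that dispatch (the contexts, gesture names, and event names here are illustrative, not Xbox APIs):

```python
# Toy sketch of context-specific gesture dispatch: the same detected
# movement maps to different on-screen events depending on the
# foreground application. All names are illustrative, not Xbox APIs.
from typing import Optional

GESTURE_MAP = {
    "rafting_game": {"jump": "avatar_jump",
                     "lean_left": "steer_left",
                     "lean_right": "steer_right"},
    "netflix_menu": {"wave_left": "previous_tile",
                     "wave_right": "next_tile"},
}

def dispatch(context: str, gesture: str) -> Optional[str]:
    """Translate a recognized gesture into an event, or ignore it."""
    return GESTURE_MAP.get(context, {}).get(gesture)

print(dispatch("netflix_menu", "wave_right"))  # -> next_tile
print(dispatch("rafting_game", "wave_right"))  # -> None (ignored here)
```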

It has an easier time recognizing some gestures and postures than others. As Kotaku noted this summer, recognizing human movement — at least, any movement more subtle than a hand-wave — is easier to do when someone is standing up (with all of their joints articulated) than sitting down.

So you can move your arms to navigate menus, watch TV and movies, or browse the internet. You can’t sit on the couch wiggling your thumbs and pretending you’re playing Street Fighter II. It’s not a magic trick cooked up by MI-6. It’s a camera that costs $150.

Audio

Kinect also has a stereo microphone to enable chat and voice commands. The tech on the audio capture is fairly well-known, but it’s worth observing that unlike the noise-canceling microphone you might have on your smartphone or laptop’s webcam, Kinect has a wide-field, conic audio capture.

This is because, unlike a smartphone, you wouldn’t want the Kinect’s microphone to capture only sounds close to it: It’d only pick up the sound of the television set. You want it to capture ambient speech throughout the room, such as that emitted by whole groups of people watching sports or playing games.

Figure: Screenshot from Kinect Sports Hurdles.

A traditional videogame controller is individual and serial: It’s me and whatever I’m controlling on the screen versus you and what you’re controlling. We might play cooperatively, but we’re basically discrete entities isolated from one another, manipulating objects in our hands.

A videogame controller is also a highly specialized device. It might do light work as a remote control, but the buttons, d-pads, joysticks, accelerometers, gyroscopes, haptic feedback mechanisms and interface with the console are all designed to communicate very specific kinds of information.

Kinect is something different. It’s communal, continuous and general: a Natural User Interface (or NUI) for multimedia, rather than a GUI for gaming.

But it takes a lot of tech to make an interface like that come together seamlessly and “naturally.”

Source: http://bit.ly/anzdBD

 

Posted on November 10, 2010 in Technology

 
 