[Ongoing]
This project involves developing a framework that takes a low-resolution image as input and produces a high-resolution image with enhanced quality. It implements the algorithm proposed by Glasner et al. (2009) in
"Super-Resolution From a Single Image", which attempts to recover, at each pixel, the best possible resolution increase based on the patch's redundancy within and across scales of the image.
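The within-and-across-scale patch search at the heart of the method can be sketched as below. This is a minimal illustration, not the full algorithm: `downscale` and the brute-force SSD search are simplified stand-ins (hypothetical helper names) for the blur-and-subsample pyramid and the approximate nearest-neighbor patch search used in practice.

```python
import numpy as np

def downscale(img, factor=2):
    """Box-filter downscale by an integer factor (a crude stand-in
    for the blur-and-subsample pyramid used in the paper)."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    return img[:h, :w].reshape(h // factor, factor,
                               w // factor, factor).mean(axis=(1, 3))

def best_match(patch, img):
    """Exhaustive SSD search for the most similar patch in img;
    returns the top-left corner of the best match and its SSD."""
    p = patch.shape[0]
    best, best_pos = np.inf, (0, 0)
    for y in range(img.shape[0] - p + 1):
        for x in range(img.shape[1] - p + 1):
            ssd = np.sum((img[y:y + p, x:x + p] - patch) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos, best
```

Searching for each patch of the input inside `downscale(img)` is what exposes the cross-scale redundancy the method exploits: a match found at the coarser scale points back to a higher-resolution "parent" region in the original image.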
[Project Homepage] [Jan-Mar'11] 1st Prize, Project Presentation (Communications & Networks), APOGEE 2011.
Leveraging the concept of Cloud Robotics, this project aims to construct a bot completely controlled by a hand-held Android device.
Equipped with features such as self-navigation and speech and gesture controls, the bot provides quick access
to the web via a speech interface. The bot sends images of people it meets to a remote PC,
where face extraction and emotion recognition are performed and appropriate action is taken.
[Under Dr. AS Mandal, Head, Perception & Cognition Lab, CEERI, Pilani] [Aug-Dec'10]
The project involved face detection using the Viola-Jones algorithm, followed
by feature extraction using Gabor filters and subsequent training using SVMs. Training was
done on a dataset of Indian faces built at CEERI.
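The Gabor feature-extraction stage of such a pipeline can be sketched as follows. The kernel sizes, wavelengths, and orientations here are illustrative assumptions, not the parameters actually used in the project, and the energy-pooling step is one common way (of several) to turn filter responses into a fixed-length vector for an SVM.

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma, gamma=0.5):
    """Real part of a Gabor filter: a sinusoid of wavelength lam at
    orientation theta under an elliptical Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def gabor_features(face, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                   lam=4.0, sigma=2.0):
    """Convolve (via FFT) with a small filter bank and pool mean
    response energy per orientation into a feature vector."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(9, theta, lam, sigma)
        padded = np.zeros_like(face)
        padded[:k.shape[0], :k.shape[1]] = k
        resp = np.fft.ifft2(np.fft.fft2(face) * np.fft.fft2(padded)).real
        feats.append(np.mean(np.abs(resp)))
    return np.array(feats)
```

The resulting vectors (one per detected face crop) would then be fed to an SVM trainer; a real system would use a larger bank of scales and orientations.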
[Snapshot]
1st Prize, Cyberfiesta'11, National Level Open Software Competition [Jan-May'10]
A mobile application that promotes social networking among users over Bluetooth.
An ad-hoc mobile social network is created using P2P communication.
Used UUID generation and toggling of each device between master and slave
configuration to enable exchange of information between every pair of devices.
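The UUID and role-toggling idea can be sketched as follows. The original app would have run on mobile Java rather than Python, and the service name and tie-break rule here are illustrative assumptions; the point is that a shared namespace-derived UUID lets every phone advertise and discover the same service, and a deterministic rule lets each pair agree on who plays master for an exchange without extra negotiation.

```python
import uuid

# Hypothetical service name: deriving the UUID from a fixed namespace
# means every device computes the SAME service UUID independently,
# which is what makes peer discovery possible.
SERVICE_NAME = "BluetoothSocialNet"
SERVICE_UUID = uuid.uuid5(uuid.NAMESPACE_DNS, SERVICE_NAME)

def role_for_exchange(my_addr: str, peer_addr: str) -> str:
    """Deterministic tie-break: the device with the lexicographically
    smaller Bluetooth address acts as master (server) for this
    exchange. Both sides compute the same answer, so the pair can
    toggle roles without any negotiation round-trip."""
    return "master" if my_addr < peer_addr else "slave"
```

With every pair agreeing on roles this way, each device can alternate between serving and connecting, so information flows between every pair in the ad-hoc network.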
2nd Prize, Project Presentation (Mathematical Modelling), APOGEE 2011 [Jan-May'11]
Implemented agent-based modelling in the NetLogo environment (Java-based),
taking human psychology into account to model individual and collective behavior.
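As a flavor of what such a model looks like, below is a minimal Granovetter-style threshold model of collective behavior, a classic way to encode individual psychology (each agent's personal threshold) into crowd dynamics. This is a generic illustration in Python, not the project's actual NetLogo model.

```python
def run_threshold_model(thresholds):
    """Each agent joins the collective action once the fraction of
    agents already active meets its personal threshold. Iterates
    synchronous updates to a fixed point and returns the final
    fraction of active agents."""
    n = len(thresholds)
    active = [t == 0.0 for t in thresholds]  # zero-threshold instigators
    while True:
        frac = sum(active) / n
        new = [a or (frac >= t) for a, t in zip(active, thresholds)]
        if new == active:
            return frac
        active = new
```

A uniform spread of thresholds produces a full cascade from a single instigator, while a population with no instigators never mobilizes, which is exactly the kind of individual-to-collective sensitivity such models are used to study.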
[Self-Motivated] [Aug-Dec'09]
Involved hand segmentation and fingertip detection (without the use of color markers)
using a webcam, based on image-processing algorithms, and interfacing the computer with a
5-degree-of-freedom robotic arm. The robotic arm was then controlled by the user's hand gestures.
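The segmentation and fingertip-locating steps can be sketched as below. This is a deliberately simplified stand-in: real marker-free systems use color-space or background-model segmentation and contour-curvature analysis, whereas here segmentation is a plain intensity threshold and the fingertip heuristic assumes the hand enters the frame from the bottom.

```python
import numpy as np

def segment_hand(gray, thresh=0.5):
    """Toy stand-in for skin segmentation: threshold a grayscale
    frame (values in [0, 1]) into a binary hand mask."""
    return gray > thresh

def fingertip(mask):
    """Heuristic fingertip locator: with the hand entering from the
    bottom of the frame, the topmost mask pixel (smallest row index)
    is taken as the fingertip. Returns (row, col) or None."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    i = np.argmin(ys)
    return int(ys[i]), int(xs[i])
```

Tracking this point frame-to-frame gives the motion trajectory that would then be mapped to commands for the robotic arm.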
[Snapshot1] [Snapshot2]