Apr 30, 2014

At Photonics West SCHOTT introduces coatings with high laser damage threshold

The international technology group and specialty glass expert SCHOTT is now offering a wide range of coated materials for laser applications. Thanks to improved coating capabilities at its site in Yverdon-les-Bains, Switzerland, the company can offer improved coated active glass devices as well as passive components made from materials such as SCHOTT N-BK7® and FK5 optical glasses, fused silica, filter glass, and ZERODUR® glass-ceramic. A coating with a higher laser damage threshold is now available, which is relevant for the development of next-generation laser technology in various fields. SCHOTT is displaying its products at Photonics West (Booth #1601).
SCHOTT delivers laser glass with coatings that have a high laser damage threshold, relevant for the development of next-generation laser technology in various fields. Photo: SCHOTT.
“We have been working diligently to make our laser components fit for use in large, demanding projects and a wide range of applications,” says Todd Jaeger, Sales Manager for Advanced Optics at SCHOTT North America, Inc. New test results presented at SPIE confirm the improved performance of these components. One of the decisive factors is the laser damage threshold, which must be high enough for high-power applications that deliver high-energy pulses at very high repetition rates. SCHOTT’s excellent material processing and polishing skills, together with its improved high-quality dielectric coating technology, make the materials among the most resistant to laser-induced damage. Furthermore, new optical coating designs allow for the broadband high reflection and dispersion control needed for ultrashort-pulse, high-power applications. “While laser glass is produced and post-processed at our site in Duryea, Pennsylvania, the coating steps are carried out at our facility in Switzerland,” says Jaeger.
Improved material for use in new rangefinder generation
Also highlighted will be SCHOTT’s LG940 glass, ideal for "eye-safe" laser transmitters used in defense and industrial applications such as laser rangefinders and laser target designation systems. LG940, operating near the 1.5 µm wavelength, has increased strength and excellent laser and optical properties for use at higher repetition rates than previous products. For the warfighter, SCHOTT LG940 enables the widespread production of novel laser amplifier designs for compact, low-power, reliable rangefinders for soldier systems and ground and air vehicles, with reduced weight and system costs.
In addition, SCHOTT expanded its portfolio with the ‘eye-safe’ phosphate laser glass LG940, which works in a short pulse regime for use in medical applications such as cosmetic laser treatments in dermatology. New applications in ophthalmic optics are also now possible.
Wide range of passive materials for laser applications
SCHOTT produces a range of components for improving lasing efficiencies and powers. These components such as mirrors, polarizers, and beam splitters are made of passive materials such as fused silica, optical glass (SCHOTT N-BK7®, FK5), filter glass or ZERODUR® glass-ceramic. All materials offer extremely high accuracy and cater to demands for the highest quality. Passive laser glass can, for example, be used as laser pumping cavity filters, which absorb undesired pumping light in the UV range, preventing solarization of the laser glass.



Apr 29, 2014

Fourier Ptychographic Microscopy: A Gigapixel Superscope for Biomedicine

Standard microscopes are fussy creatures. They require constant adjustments to bring a sample into focus. To see a feature at higher resolution, you must switch to a different microscope objective, with a reduced field of view. To get images without significant chromatic (color) distortion, be prepared to pay for more expensive, high-quality objectives—remarkable feats of engineering that precisely pack in numerous lens elements to cancel each other’s aberrations.
All of these limitations, and more, are intrinsic to physical lenses. The perfect lens that we draw in high school ray diagrams simply does not exist in the physical world. Microscopy particularly highlights these limitations, because of the contortions light rays must go through within the microscope.


Until now, getting a better performance from a standard microscope would involve engineering the microscope to minimize the aberrations. Yet even with the best physical techniques, the number of resolvable pixels within a microscope’s field of view is still not much more than 10 megapixels. You can have a large field of view and a poor resolution, or a small field of view and a good resolution—but not both.
It turns out, however, that this inherent limitation can be overcome not physically, but computationally—by numerically transforming a poor-quality standard microscope into an “optically perfect,” aberration-free scope with gigapixel resolution. We call this computational approach Fourier ptychographic microscopy (FPM).

The FPM concept

Optical engineers use an imaging system’s space-bandwidth product (SBP) to characterize its total resolvable pixels. The SBPs of most off-the-shelf objective lenses are on the order of 10 megapixels, regardless of their magnification factors or numerical apertures (N.A.s). A standard 20× microscope objective lens has a resolution of 0.8 µm and a field of view 1.1 mm in diameter, corresponding to an SBP of approximately 8 megapixels.
Given that limitation, how can we design a microscope platform with a gigapixel SBP? We could, of course, simply scale up the lens size to increase the SBP—but as the size of a lens increases, so do its associated geometrical aberrations. That, in turn, requires the introduction of more optical surfaces to increase the degrees of freedom in lens optimization. The result: a lens design that’s expensive to produce, difficult to align and impractical for a conventional microscope platform.
FPM tackles the problem from another perspective—computational optics. Specifically, FPM brings together two innovations to bypass the SBP barrier:
Phase retrieval. Light detectors such as charge-coupled devices (CCDs) and photographic plates measure only intensity variations of the light that hits them; in the process of recording they lose the phase information, which characterizes how much the light is delayed through propagation. The phase retrieval algorithm, originally developed for electron imaging, computationally recovers this lost phase information from two or more distinct intensity measurements. It typically consists of iteratively reinforcing these known intensities while an initially random phase “guess” is allowed to converge to a solution that matches all measurements.
Aperture synthesis. Originally developed for radio astronomy by Martin Ryle, aperture synthesis aims at bypassing the resolution limit of a single radio telescope by combining images from a collection of telescopes. This expands the single telescope’s limited Fourier pass-band, thus improving the achievable resolution.
By integrating these two innovations, FPM uses a unique data fusion algorithm to recover a high-resolution, high-SBP sample image. This image contains both the intensity and phase information of the sample—a complete picture of the entire light field.

Setting up a superscope

The physical FPM setup is simple: an array of LEDs is placed beneath a conventional microscope with a low-N.A. objective lens. The LED array successively illuminates the sample from multiple angles. At each illumination angle, FPM records a low-resolution intensity image through the low-N.A. objective lens.
The objective’s optical-transfer function imposes a well-defined constraint in the Fourier domain. This constraint is digitally panned across the Fourier space to reflect the angular variation of its illumination. After phase retrieval, FPM recovers a high-resolution complex sample solution by alternately constraining the amplitude to match the acquired low-resolution image sequence and the spectrum to match the panning Fourier constraint. The imposed panning Fourier constraint also enables expansion of the Fourier pass-band following principles set forth with aperture synthesis.
The largest incident angle of the LED array determines the final resolution of the FPM reconstruction. Thus FPM can bypass the design conflicts of conventional microscopes and simultaneously achieve both high-resolution and wide-field-of-view imaging.
Intuitively, the amplitude and phase profile of the light field emerging from the sample serves as the unknown “ground truth.” Each low-resolution measurement represents an attenuated-intensity observation of this ground truth. As long as we know the optical-transfer function that links each measurement to the ground truth, the FPM algorithm is able to iteratively generate improved “guesses” of the ground truth. To make sure that the final guess is accurate, and that the algorithm converges successfully, the amount of information collected must exceed the amount of information associated with the ground truth.
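The following is a minimal NumPy sketch of that alternating-projection recovery loop. The grid sizes, the binary pupil model, and the way illumination angles map to Fourier-plane offsets are illustrative assumptions on my part, not the published implementation:

```python
import numpy as np

def fpm_recover(low_res_images, k_offsets, pupil, n_iters=20, upsample=4):
    """Toy FPM recovery loop (illustrative sketch only).

    low_res_images : list of (m, m) measured intensity images, one per LED angle
    k_offsets      : list of (dy, dx) integer offsets of each illumination angle
                     in the high-resolution Fourier plane (must keep the crop
                     window inside the grid)
    pupil          : (m, m) binary pupil mask (1 inside the objective N.A.)
    """
    m = low_res_images[0].shape[0]
    M = m * upsample                              # high-resolution grid size
    hi_res = np.ones((M, M), dtype=complex)       # flat initial guess
    HI = np.fft.fftshift(np.fft.fft2(hi_res))     # high-res spectrum estimate
    c, half = M // 2, m // 2

    for _ in range(n_iters):
        for img, (dy, dx) in zip(low_res_images, k_offsets):
            ys, xs = c + dy - half, c + dx - half
            # 1. Crop the sub-spectrum selected by this illumination angle
            sub = HI[ys:ys + m, xs:xs + m] * pupil
            low = np.fft.ifft2(np.fft.ifftshift(sub))
            # 2. Enforce the measured amplitude, keep the estimated phase
            low = np.sqrt(img) * np.exp(1j * np.angle(low))
            # 3. Push the corrected spectrum back into the high-res estimate
            sub_new = np.fft.fftshift(np.fft.fft2(low)) * pupil
            HI[ys:ys + m, xs:xs + m] = HI[ys:ys + m, xs:xs + m] * (1 - pupil) + sub_new
    return np.fft.ifft2(np.fft.ifftshift(HI))     # complex high-res sample estimate
```

Each pass reinforces the measured amplitudes in the image domain while stitching the corresponding sub-apertures together in Fourier space, which is precisely the combination of phase retrieval and aperture synthesis described above.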
As FPM’s name implies, the process of collecting redundant information of a sample under different types of illumination is inspired by ptychography, a lensless imaging technique originally proposed within the electron microscope community and extended by H.M.L. Faulkner and J.M. Rodenburg. Unlike classical ptychography, however, FPM uses angle-varied illuminations and does not involve any moving parts. The use of lens elements in FPM settings also provides a higher signal-to-noise ratio and reduces the coherence requirement of the light beams. These characteristics make FPM potentially ideal for high-sensitivity imaging applications.
In addition to boosting spatial resolution, FPM’s use of redundant data can push conventional microscopes past two other critical limitations. First, FPM can automatically correct for any inherent optical aberrations that still deteriorate the quality of each image. And, second, the microscope’s depth of focus can be digitally extended beyond the physical limitations of the employed optics.

Figure: A simple FPM setup (left image) allows capture of the full field of view of a 2× objective (top center) with the resolution of a 20× objective. [Adapted from Zheng et al., Nat. Photon. 7, 739 (2013).]

Wide field + high resolution + long depth of focus

In high-throughput biomedical applications, the conflict between the microscope’s resolution and its field of view has been a long-standing bottleneck. A common solution in industry comes from robotics: a high-N.A. objective lens is attached to a robotic scanner to mechanically scan and acquire multiple high-resolution sample images. The acquired images are then stitched together in the spatial domain to expand the field of view. These robotic platforms, however—which commonly require precise actuation controls, accurate optical alignment, precise motion tracking and a high level of maintenance—don’t exist in resource-limited environments, and a trained technician may be needed to review the sample manually.
FPM, by contrast, can create wide-field-of-view, high-resolution microscopic images without any moving parts, and with no significant hardware modification for most existing microscope platforms. In our prototype setup, we used a 2×, 0.08 N.A. objective lens, which provides an intrinsic resolution of 4 µm and a field of view of 13 mm in diameter, corresponding to an SBP of ~30 megapixels. FPM acquisition and post-processing with this microscope improved its resolution to 0.78 µm, while retaining its 13 mm diameter field of view. As such, the SBP of the FPM prototype approached 1 gigapixel, at least 25 times higher than that of the original system.
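As a rough sanity check on those numbers, the sketch below estimates the SBP from the quoted field of view and resolution, assuming a circular field of view sampled at the Nyquist rate of half the resolution (a simplification on my part, not the authors' exact accounting):

```python
import math

def sbp_megapixels(fov_diameter_mm, resolution_um):
    """Space-bandwidth product: circular FOV area divided by the Nyquist pixel area."""
    fov_area_um2 = math.pi * (fov_diameter_mm * 1e3 / 2) ** 2
    pixel_um = resolution_um / 2          # Nyquist: two pixels per resolved period
    return fov_area_um2 / pixel_um ** 2 / 1e6

print(sbp_megapixels(13, 4.0))    # ~33 MP for the raw 2x / 0.08 N.A. objective
print(sbp_megapixels(13, 0.78))   # ~870 MP, approaching 1 gigapixel after FPM
```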
Another limitation of high-resolution microscopy is the short imaging depth of the associated objective lens. The depth of focus of a 40×, 0.75 N.A. objective, for example, is about 0.5 µm. Acquiring an image with such a high-N.A. objective requires placing the sample exactly at the focal position of the microscope platform; otherwise, the final image won’t resolve any detailed information. Unfortunately, most practical samples—such as substrates for many biological specimens—are not 100 percent flat. That means a challenge for wide-field, high-resolution imaging: researchers need to constantly adjust the stage to bring the sample into focus when moving across different lateral positions for creating a wide-field-of-view image.
FPM tackles this challenge by using digital refocusing in the recovery procedure. A phase factor is introduced in the objective’s pupil function to correct for the sample defocus. This simple correction enables FPM to extend the depth of focus to 0.3 mm, two orders of magnitude longer than that of a conventional platform with a similar N.A. To recover an all-in-focus wide-field image using FPM, the entire image is divided into many small segments, which can then be individually refocused digitally and stitched together afterwards to form an all-in-focus image.
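To make the idea concrete, here is a minimal NumPy sketch of such a defocus correction using an angular-spectrum phase factor; the wavelength, sampling, and defocus distance are illustrative assumptions rather than values from the original work:

```python
import numpy as np

def defocus_pupil(pupil, dz_um, wavelength_um, pixel_um):
    """Multiply an objective pupil by an angular-spectrum defocus phase factor.

    pupil         : (m, m) complex pupil function (coherent transfer function)
    dz_um         : defocus distance to correct for digitally
    wavelength_um : illumination wavelength
    pixel_um      : real-space pixel size of the corresponding image grid
    """
    m = pupil.shape[0]
    fx = np.fft.fftshift(np.fft.fftfreq(m, d=pixel_um))   # spatial frequencies (1/um)
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength_um
    kz2 = k**2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))                     # drop the evanescent part
    return pupil * np.exp(1j * kz * dz_um)
```

Using this defocused pupil in place of the ideal one inside the recovery loop sketched earlier is what allows each image segment to be brought back into focus numerically.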

Figure: A phase factor introduced into the objective’s pupil function can correct for the sample defocus—a simple step that enables FPM to extend the depth of focus to two orders of magnitude longer than that of a conventional microscope with a similar N.A. [Adapted from Zheng et al., Nat. Photon. 7, 739 (2013).]
The combination of wide field of view, high resolution and long depth of focus promises substantial gains in a variety of biomedical applications—from digital pathology, hematology and cell culture analysis to neuroanatomy, microarray technology and immunohistochemistry. For example, FPM can image a wide field that covers most of the area of a typical pathology slide, yet provides fine details to the level of cellular structure. This technique may potentially free pathologists from hours bent in front of the microscope, manually moving the sample to different regions. Introducing digital imaging into clinical environments could allow FPM to be integrated with other image-processing algorithms for computer-aided diagnostics.

Quantitative phase imaging

The phase retrieval process of FPM recovers both the amplitude and phase of the optical field exiting a sample—and, by comparing the computationally reconstructed phase with other direct phase-imaging techniques, we have shown separately that the phase information from FPM is quantitative. In addition to extending depth of focus and removing aberrations (as described earlier), this quantitative phase information can help solve another perennial problem in working with biomedical specimens: identifying properties of weakly scattering and highly transmissive material, such as live cells.
The major challenge of generating intrinsic contrast from biological specimens is that they generally do not absorb or scatter light significantly—i.e., they are mostly transparent under a microscope. Because of differences in the refractive indices of various biological structures, however, light going through different parts of a sample is differentially delayed, which influences the light’s phase. Phase contrast microscopy, developed in the 1930s, enhanced image contrast by converting phase information into intensity values, and allowed for significant advances in intrinsic-contrast imaging that revealed the inner details of transparent specimens without the use of staining or tagging.
The phase contrast microscope couples phase to intensity in a nonlinear fashion, however, and that makes quantitative analysis challenging. Numerous quantitative phase imaging techniques have since been developed, but most of them involve the use of a high-coherence laser source, which suffers from speckle artifacts that limit image contrast. FPM, on the other hand, uses low-coherence LEDs at varied angles as its light source, producing images with minimal speckle and more spatially uniform illumination.

Figure: FPM obtains quantitative phase information of the sample and produces images with minimal speckle, without any extra cost or change to the microscope setup. [Adapted from Ou et al., Opt. Lett. 38, 4845 (2013).]
FPM’s quantitative phase images could hold substantial promise in the clinic. For example, while the average phase shifts in tumor tissues and normal tissues have similar values, the detailed statistics of the spatial fluctuations are completely different—the phase shift values are more spatially disordered and have a higher variance for tumor tissues than for normal tissues. FPM’s quantitative and speckle-free phase imaging can measure these spatial fluctuations precisely and can be used to identify potential tumors, leading to a novel, label-free and quantitative approach to cancer diagnosis.
Another promising application for phase imaging is numerical simulations of other functional microscopy methods like differential interference contrast (DIC), phase contrast and dark field. Because FPM obtains the quantitative phase information, such simulations can be done without any extra cost or change to the microscope setup.
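For instance, a dark-field view can be emulated directly from the recovered complex field by suppressing the unscattered, low-spatial-frequency light. A minimal sketch follows; the cutoff radius is an arbitrary illustrative choice, not a value from the published work:

```python
import numpy as np

def simulated_darkfield(complex_field, cutoff_fraction=0.05):
    """Emulate a dark-field image from a recovered complex field by
    removing the low-spatial-frequency (unscattered) component."""
    M = complex_field.shape[0]
    F = np.fft.fftshift(np.fft.fft2(complex_field))
    y, x = np.indices((M, M)) - M // 2
    F[np.hypot(x, y) < cutoff_fraction * M] = 0   # block the central pass-band
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F))) ** 2
```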
FPM also has some limitations. The acquisition speed of the FPM prototype is currently limited by the low light intensity of the LED array; in particular, the light-delivering efficiency is lower than 20 percent for the LED elements at the edge, corresponding to large incident angles. Using a high-power LED array or angling the LEDs toward the sample can address this limitation. Furthermore, the current processing time for generating a full-field-of-view image is longer than 10 minutes on a laptop; a graphics processing unit could speed this up, as the FPM algorithm is highly parallelizable. Finally, FPM is not a fluorescence technique, as the fluorescent emission profile would remain unchanged under angle-varied illuminations.

Beyond the optical microscope

The specific FPM examples shown here all depend on an external light source array. Changing the angle of sample illumination, however, is not the only way to modulate a sample’s spectrum within the Fourier domain. Alternative setups that don’t require an LED array may achieve the same resolution enhancement, and could enable FPM to reach into nonvisible imaging domains, where such light source arrays are not readily available.
The electron microscope, for example, shares many direct parallels with visible-light microscopes, although it relies on completely different physics to form images. It was with electron waves that conventional ptychography, which forms a large part of the foundation of FPM, was first proposed.
As with FPM, conventional ptychography also illuminates a sample in different ways and records a series of intensity measurements. However, instead of varying the angle of illumination, it shifts around a confined electron beam focused directly upon a sample. (In practice, the sample itself is commonly shifted mechanically through a fixed illumination spot.) And, instead of imaging the sample directly, a lensless geometry is used to record the sample’s spectrum in the far field. A phase retrieval strategy—matching FPM, but instead operating in the spatial domain with knowledge of the translated sample’s different locations—is then used to recover a complex sample image from the series of recorded intensity spectra.
Several problems currently limit the function of conventional ptychography within the electron microscope. For example, laterally shifting either the sample or the probe beam introduces mechanical instabilities, which ultimately limit the technique’s final resolution. While directly implementing the previously discussed FPM procedure requires an array of individual electron sources, which may be prohibitively challenging, several other techniques can modulate the sample’s spectrum without requiring any moving parts.
One of the most promising such techniques is to simply use the magnetic deflection coils already included within many electron microscopes to shift the incident angle of the sample illumination field. Such voltage-controlled deflection is quite precise, although this type of modulation will still rely upon a thin-sample approximation.
X-ray imaging setups may likewise benefit from new experimental arrangements opened up by FPM. Over the past several years, coherent X-ray setups have already adopted conventional ptychography to achieve unprecedented resolutions. As with the electron microscope, there is no fundamental reason why modulation must occur within the sample’s spatial domain. For example, a Fourier-modulated X-ray geometry may be achieved with a rotatable Bragg grating placed in the sample’s far field. The grating can relay different spectrum-modulated images to a sensor via a Fresnel zone plate, and the FPM algorithm may remove any system aberrations to yield a sharp reconstruction of the sample’s intensity and phase.
In sum, the principles underlying FPM can extend beyond the computational microscope we have already demonstrated. Alternative scenarios outside the realm of visible optics may find a variety of benefits in specific applications, both within and outside of biomedicine.

Apr 17, 2014

High power laser sources at exotic wavelengths

High power laser sources at exotic wavelengths may be a step closer as researchers in China report a fibre optic parametric oscillator with record-breaking efficiency. The research team believes this could lead to new light sources for a range of biomedical imaging applications.

For fibre lasers the most commonly available outputs are around 1.06, 1.5 or 2.0 µm. Developing materials that can provide gain in less conventional wavelength bands is difficult and it is easier to employ conversion from the wavelength of a more conventional laser source.
Fibre optical parametric amplifiers (FOPAs) can convert optical energy from conventional wavelengths, and a fibre optical parametric oscillator (FOPO) based on the gain from a FOPA can generate tuneable radiation. In theory, FOPAs could provide optical gain at nearly any wavelength, and so FOPOs could emit laser light at nearly any wavelength. However, low conversion efficiencies limit the usefulness of FOPOs as light sources.
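In a FOPA the gain comes from four-wave mixing, in which two pump photons are converted into a signal photon and an idler photon, so energy conservation (2/λ_pump = 1/λ_signal + 1/λ_idler) fixes where the converted light appears. A small illustrative calculation follows; the wavelengths are example values, not those used by the Chinese team:

```python
def idler_wavelength_nm(pump_nm, signal_nm):
    """Energy conservation for degenerate four-wave mixing: 2/lp = 1/ls + 1/li."""
    return 1.0 / (2.0 / pump_nm - 1.0 / signal_nm)

# Example: a 1064 nm pump amplifying a 920 nm signal generates an idler near 1261 nm
print(idler_wavelength_nm(1064, 920))
```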


Read more at: http://phys.org/news/2014-04-high-power-laser-sources-exotic.html#jCp


Apr 16, 2014

Microsoft: Let us be your one stop for big data analytics

If there was any question that Microsoft wants to be everyone's data analytics shop, the company erased those doubts with its Monday presentation. In fact, Microsoft laid out plans for how the next iteration of many of its products -- SQL Server, Microsoft Azure, and its Hadoop-powered analytics components both inside and outside of Microsoft Azure -- are being designed as a single data platform.
Certainly, this was Microsoft's aim even before Satya Nadella replaced Steve Ballmer. Microsoft was linking SQL Server and Hadoop back in 2011, added connectivity for SQL Server with the Azure-hosted version of Hadoop in 2012, and expanded the ingestion, storage, and processing capacity of Azure along the way. But with Monday's presentation, Microsoft clarified how it all ties together.
The first and most obvious ingredient is SQL Server 2014, with new features like Azure connectivity and in-memory processing. The good stuff isn't mere hype; InfoWorld's Test Center looked at SQL Server 2014 and came away impressed by its performance gains, high-availability functionality, and backup-to-Azure options. But having SQL Server as the sole cornerstone for a big data or data analytics operation is a little like having a cargo plane as your only form of air travel, instead of a 767 for passengers or a twin-engine prop for quick getaways.
Small wonder, then, that SQL Server is getting complementary technology in the form of the Analytics Platform System (APS). It's close to the query-everything approach now used by Microsoft's Hadoop partner Hortonworks and by EMC/VMware Hadoop spin-off Pivotal. Each company boasts of the ability to tap into multiple data sources, including Hadoop.
APS, Microsoft's successor to its Parallel Data Warehouse product, does the same in reverse, pulling data in from Hadoop and more conventional sources. The details of how this works remain skimpy, but it's presented as an appliance (Microsoft calls it "big data in a box"). The company also hinted that APS can be used to take existing data confined within an organization, export it to Azure, process it there, and work with the results in tools like Excel.
Also in the mix is the ostentatiously named Microsoft Azure Intelligent Systems Service (ISS), "a cloud-based service to connect, manage, capture, and transform machine-generated data regardless of the operating system or platform." ISS runs on Azure -- a sure sign of where the ingested data is meant to be stored, analyzed, and processed. Again, Pivotal comes to mind as one of its boasting points was how data can be processed where it's already stored.
This last offering bears more than a passing resemblance to Amazon's Kinesis service, although Amazon's approach to data analytics is far more of a DIY kit, where the user is provided with various pieces (data ingestion, data storage, data analytics) that have to be cobbled together by hand. Microsoft envisions a situation where most of that heavy lifting has already been done, if only by dint of having all the products bear Microsoft's logo and sport a degree of native interconnectivity, and where existing Microsoft tools (the Hadoop-powered HDInsight and Power BI for Office 365, to name two examples) are used.
The cutesy formula Microsoft used to describe the different parts of its platform was "[data + analytics + people] @ speed." The "people" side of the equation once again includes the likes of connectivity to data through Microsoft Excel -- a smart way to keep part of the desktop edition of Microsoft Office relevant to business users. That's also a good metaphor for Microsoft's approach to the issue of big data: by having something to offer everyone somewhere in its stack -- and maybe being able to offer everything to someone.

Moving for new Windows

Leaked screenshots of the next set of features for Windows 8 were posted to the pirate site Wzor.net this past week, confirming that the next release (called by some a Feature Pack, not a Service Pack) will debut on April 7 or 8. The final name isn't available yet, but people are calling it Windows 8.2, Windows 8.1 Update 1 (which sounds odd), or Spring Update. My own name for it is NW9 (for "Not Windows 9"), but for this post, let's call it Windows 8.2.
This update is meant to address a variety of complaints by adding a few more Band-Aids for Windows 8's lack of mouse and keyboard support for nontouch PCs. Metro apps (aka Modern or Windows Store apps) will be available through the Windows Desktop's taskbar, and there will be standard Windows-style title bar options for splitting (aka snapping), minimizing, and closing Metro apps. (That's great because I'm pretty tired of having to grab an app in the middle at the top and pull downward to make it go bye-bye.)
Nontouch devices will have the Power options (shutdown, restart, and suspend) available from the Start screen. SkyDrive is renamed OneDrive to satisfy Microsoft's lawyers, who got beat up by British broadcaster BSkyB over trademark-infringement claims. Internet Explorer 11 has an enterprise mode for Windows 7 business users whose apps are locked in to IE8.
As you can see, nothing monumental is happening here. That's led to more criticism against the often-criticized Windows 8. Microsoft's poor OS can't catch a break from its critics.
I'm not jumping on the critics' bandwagon. Yes, I know I've been beating up on Windows 8 since its very first public preview release, when I called it "Windows Frankenstein." I've even stated that I would prefer to skip Windows 8.2 and go directly to Windows 9 if at all possible.
But let's be reasonable for a moment. Windows 8 (and now 8.1) is out. Like many of you, I'm using it on all my systems, as well as supporting it on family members', friends', and clients' PCs. Even if Microsoft makes a few changes with each update, that's better than nothing. It's better than waiting a year for something entirely new, with no guarantee I'll like it. I hope it doesn't work out that way, but we might hate Windows 9 even more than Windows 8! Although I have good reason to believe differently (otherwise, I'd be brushing up on my Linux right now), I'm glad to see some complaints addressed with Windows 8.2.
There is a time to complain and a time to hold off. Monumental changes are going on at Microsoft at the moment: a new CEO, Satya Nadella; major executive swaps and shifts in other departments; and a new vision. I believe it's best to ride out the next year hoping for a better one, and to take what we can get in terms of improvements on an OS that has already been vilified worse than Vista (though I didn't think the vilification was entirely deserved in Vista's case, I do with Windows 8).

Apr 12, 2014

Google to Offer Glass to All U.S. Residents for a Limited Time


MOUNTAIN VIEW, Calif.—Google is about to expand the availability of Google Glass, its proprietary Internet-connected eyewear. 

In an announcement on the Google+ website yesterday, the company said that on Tuesday, April 15 at 9:00 am EDT, it will allow any adult U.S. resident to join its Glass Explorer program. So far, the company has limited enrollment in the program to a select number of applicants. Those wishing to become Explorers must pay $1,500 plus tax for Glass, the same price Google has previously been charging for the device. 

Google noted that the number of spots in the Explorer program is limited, but did not specify how many spots are available or how long it will continue to accept new applicants to the program. 

Google made a special presentation about Glass at Vision Monday’s Global Leadership Summit on March 26, and Summit attendees had an opportunity to try the device.

Apr 11, 2014

Jet pilots go ‘missing’, give Germans MH 370 scare

NEW DELHI: Two Jet Airways pilots who set their headphones aside caused panic at German airports on March 13, as air traffic control could not get in touch with them for a full 30 minutes. Because this loss of contact came just five days after the Malaysia Airlines MH370 incident, the German authorities feared another missing plane and heaved a sigh of relief only after the aircraft got back in touch with them.

This scare happened when Jet's 9W-117, operated on a Boeing 777-300 extended range aircraft, was flying from London to Mumbai. While over German airspace, the two pilots removed their headsets but forgot to turn up the volume so that they could still hear any calls from a distance and respond to them. This led to a break in communication of almost 30 minutes.

The German ATC then got in touch with Jet Airways, which sent an SMS to the aircraft cockpit through the Aircraft Communications Addressing and Reporting System (ACARS). This is a digital datalink system for transmitting short messages between aircraft and ground stations via airband radio or satellite.

After getting this message, the pilots realized their mistake and responded to the German ATC, apologizing for the lapse.

But given the scare and the long duration, the German air traffic control agency, DFS Deutsche Flugsicherung GmbH, complained to India's Directorate General of Civil Aviation (DGCA). The regulator conducted an inquiry in which the pilots were quizzed and admitted removing their headsets. The pilots were grounded for two weeks.

Jet Airways also launched its own probe into this lapse and has sent a report to the German authorities.

Simultaneously, Jet Airways' Permanent Inquiry Board also inquired into the incident. Agency reports quoted a Jet statement saying: "Based on the investigation report, Jet Airways has ensured strict disciplinary action towards the concerned pilots. The report has been sent to the German authorities for closure. At Jet Airways, we endeavour to maintain the highest standards of safety for our guests, at all times."

Polling in India

High voter turnout was recorded in the 91 constituencies across 14 states and Union Territories that went to the polls in the third, and substantial, phase of the Lok Sabha elections, with Chandigarh recording the highest turnout, at 74 per cent.
Chhattisgarh, Odisha and Bihar witnessed Naxal-related violence which left two CRPF jawans and one state police officer dead.
The voter turnout this time was substantially higher than in the last Lok Sabha elections in all the constituencies.
Muzaffarnagar and Shamli in Uttar Pradesh, which witnessed communal riots in August last year, recorded "above average" voter turnout of 67.78 per cent and 70.85 per cent, respectively, Deputy EC Vinod Zutshi told reporters here.
The 10 seats of Uttar Pradesh, which went to polls today, reported a record turnout of 65 per cent as compared to 51.30 per cent recorded in the last LS polls.
The turnout in Delhi was 64 per cent, up 12 percentage points from the 2009 elections.
Chandigarh constituency recorded the highest turnout of 74 per cent, against 64 per cent in 2009 polls.
Kerala, which went to polls in single phase today, recorded 73.4 per cent voter turnout, up from 73.2 per cent last time.
Chhattisgarh's Bastar seat witnessed the lowest voter turnout among the 91 seats, at 51.4 per cent. Even so, it was higher than the 47.33 per cent recorded in the last LS polls.
EC maintained that the turnout could be "much higher" in all the seats as final reports were yet to come in with voting still on after the stipulated hours in various areas.
In Odisha, Maoists snatched EVMs and took away the battery of one voting machine.
The EC has decided to postpone polling at 22 places in Bihar -- 19 in Jamui, two in Nawada and one in Gaya -- as polling personnel were not sent there, keeping their safety in mind.
The fresh date of polls in these areas will be decided soon.
Jharkhand's four seats today witnessed 58 per cent turnout, which was higher than the 50.89 per cent recorded in last general elections.
Madhya Pradesh, where nine seats went to polls, saw an average turnout of 54.13 per cent, while in Haryana it was 65 per cent. All the 10 seats of Haryana had a single-phase poll.
In the last elections, the turnout in Haryana was 68 per cent. "The turnout could be 73 per cent in Haryana when we get the final figures," Deputy EC Alok Shukla said.
The Jammu seat also saw an impressive turnout of 66.29 per cent -- 17 percentage points higher than in 2009.
The Andaman and Nicobar Islands also witnessed a record turnout. Till the last count, it was 67 per cent, higher than last time's 64.15 per cent. But in Lakshadweep, the turnout was 71.34 per cent, lower than last time's 85.98 per cent.

Apr 7, 2014

Molecular interactions from crude samples

A simple fluidic system makes it possible to analyze complex samples such as serum or cell lysates for diagnostic research or drug screening.
Our versatile biosensor can meet the demands of any kind of experimental design.
The primary aim is to screen mouse IgG antibodies from crude samples, such as hybridoma supernatants, for the dissociation rates of bound antigen, and in addition to determine the detailed kinetic rate constants and affinity between the antigen and an antibody selected in the screen.
Measuring the kinetic properties of biomolecules has become increasingly important in drug discovery and biomanufacturing. In areas such as immunization monitoring, clonal selection, and expression analysis, biosensors offer a clear advantage. In the past, it has been necessary to obtain purified or highly diluted molecules for such studies since crude samples have posed challenges for nonspecific binding, referencing, and system microfluidics.
The ability to analyze crude samples directly saves time, labor, and money. In this article, we show how the Attana 200 system (Figure 1) can be used directly with crude samples for off-rate screening of antibodies in hybridoma supernatants containing serum, as well as for scaffold proteins in crude E. coli lysates.
Attana’s biosensors are based on the quartz crystal microbalance (QCM) technology. By applying an alternating potential to a piezoelectric quartz crystal, the crystal can be controlled to oscillate at its resonance frequency. A change in mass on the crystal surface results in a proportional change in resonance frequency.
This means that when a ligand is initially immobilized on the crystal surface, the added mass can be measured in real-time without the need for labeling. The analyte is then injected, and binding of the analyte to the surface-bound ligand increases the mass further, whereupon a new shift in the resonance frequency is registered. Flowing buffer through the sensor chamber enables detection of the release of bound analyte.
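The mass-to-frequency proportionality is commonly expressed through the Sauerbrey relation. The sketch below uses standard constants for AT-cut quartz; the resonance frequency and electrode area are example values, not Attana instrument specifications:

```python
import math

RHO_Q = 2.648      # quartz density, g/cm^3
MU_Q = 2.947e11    # quartz shear modulus, g/(cm*s^2)

def sauerbrey_df(delta_mass_ng, f0_hz=5e6, area_cm2=0.2):
    """Frequency shift (Hz) for a thin, rigid mass added to a QCM crystal.

    delta_mass_ng   : added mass in nanograms
    f0_hz, area_cm2 : resonance frequency and active electrode area
                      (example values only)
    """
    dm_g = delta_mass_ng * 1e-9
    return -2.0 * f0_hz**2 * dm_g / (area_cm2 * math.sqrt(RHO_Q * MU_Q))

# e.g. 10 ng of captured protein on a 5 MHz crystal gives a shift of a few Hz
print(sauerbrey_df(10))
```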
Using different surface coatings provides the option of capturing or immobilizing molecules and, accordingly, QCM can be used to study molecular interactions in real time. The QCM technology enables not only the study of biomolecules of varying species such as proteins, nucleic acids, and carbohydrates, but also of vastly different sizes, ranging from peptides to cells.

Clonal Selection

Researchers often need to select clones for expression of a biomolecule either secreted from or accumulated inside cells. Hybridoma and phage display are the two predominant techniques used to create a repertoire of clones (Figure 2). 
The Attana 200 system can be used throughout the entire hybridoma screening process. It offers a better methodology than ELISA for monitoring immunization, detecting the earliest antibodies of weak affinity. It also offers a powerful tool for screening important properties such as off-rate constants early in the process, cutting sample numbers and saving time. In addition, it is widely used for establishing titers, due to its low coefficient of variation.
In downstream processes such as characterization it can be used to establish specificity, kinetics, active concentration, cross reactivity, and for performing epitope mapping. Similarly, the Attana 200 system is used in the phage-display process to check specificity and correct folding of proteins in E. coli lysates after panning.
After re-cloning into the most favorable expression system, the Attana 200 system can be used under biologically appropriate conditions such as physiological temperatures to ensure that more relevant clones are chosen for expansion early in the process.

Crude Quality Issues

Crude oil is a highly variable natural resource. Its quality range is similar to that of coal: depending on the maturation of the crude, the quality can be high or low (younger crudes are of lower quality). One of the first indications of quality is color. The variations in oil color can be dramatic, and they are very indicative of the quality of that crude. Not all crude oil is black; higher-quality oils can be golden or amber in color.
All the quality measures here are based on the ability to produce the desired products. In the U.S., about 50% of the oil is converted into gasoline, so an oil that yields a higher percentage of gasoline "cuts" is more desirable and is considered a higher-quality oil. Take note: we have already used much of the higher-quality crude oil. Now we need to use the lower-quality oils too, and the general trend is toward increasingly lower-quality crudes. This quality reduction has an impact on how we refine the crude into the desirable products.

High-throughput surface plasmon resonance

Spot up to several hundred different molecules on the biochip to take advantage of its multiplexing capabilities for rapid screening (>100 sensorgrams in parallel).
The reactivity of many different species (ligands) submitted to the same environment can be compared with only one sample injection.
Optimization studies for biomolecular interaction analysis (immobilization concentration, pH…) are faster, saving you time and consumables.

Technologies based on surface plasmon resonance (SPR) have allowed rapid, label-free characterization of protein-protein and protein-small molecule interactions. SPR has become the gold standard in industrial and academic settings, in which the interaction between a pair of soluble binding partners is characterized in detail or a library of molecules is screened for binding against a single soluble protein. In spite of these successes, SPR is only beginning to be adapted to the needs of membrane-bound proteins, which are difficult to study in situ but represent promising targets for drug and biomarker development. Existing technologies, such as BIAcore™, have been adapted for membrane protein analysis by building supported lipid layers or capturing lipid vesicles on existing chips. Newer technologies, still in development, will allow membrane proteins to be presented in native or near-native formats. These include SPR nanopore arrays, in which lipid bilayers containing membrane proteins stably span small pores that are addressable from both sides of the bilayer. Here, we discuss current SPR instrumentation and the potential for SPR nanopore arrays to enable quantitative, high-throughput screening of G protein-coupled receptor ligands and applications in basic cellular biology.


Label-free molecular interaction analysis

Surface plasmon resonance is an established tool in the life-science sector. It offers a new generation of label-free biomolecular analysis, providing information on kinetic processes (association and dissociation), binding affinity, analyte concentration, and real-time molecule detection.
A large variety of bio-interactions can be monitored, such as antibody/antigen, peptide/antibody, DNA/DNA, antibody/bacteria etc.
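Kinetic constants and affinity are typically obtained by fitting such sensorgrams to a simple 1:1 interaction model. A minimal sketch of that model follows; all rate constants and concentrations below are arbitrary example values, not instrument output:

```python
import numpy as np

def sensorgram_1to1(t_assoc, t_dissoc, ka, kd, conc, rmax):
    """Simulate a 1:1 binding sensorgram.

    ka   : association rate constant, 1/(M*s)
    kd   : dissociation rate constant, 1/s
    conc : analyte concentration, M
    rmax : maximal response, arbitrary units
    """
    # Association phase: R(t) = Req * (1 - exp(-(ka*C + kd) * t))
    req = rmax * ka * conc / (ka * conc + kd)
    r_assoc = req * (1 - np.exp(-(ka * conc + kd) * t_assoc))
    # Dissociation phase: exponential decay from the end of the association phase
    r_dissoc = r_assoc[-1] * np.exp(-kd * t_dissoc)
    return np.concatenate([r_assoc, r_dissoc])

t = np.linspace(0, 300, 301)
curve = sensorgram_1to1(t, t, ka=1e5, kd=1e-3, conc=50e-9, rmax=100)
print("KD =", 1e-3 / 1e5, "M")   # affinity = kd / ka = 10 nM
```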