Techbrink

Sony’s secret weapon: the 3 layer CMOS sensor


Image Credit: Updatesville.com

Though Sony is perceived to be losing ground in the smartphone industry, that is only partly true. Its phones are not as popular as they were a couple of years ago, yet the Japanese colossus has been influencing the tech world dramatically in other ways. As the leader of the CMOS sensor industry, Sony leverages its cutting-edge technologies to build sensors that enjoy great traction with mainstream high-end smartphone players like Apple and Samsung. To maintain its dominance in the field, Sony recently introduced the world’s first 3-layer CMOS sensor.

First of all, it is crucial to understand the basic working principle of a CMOS image sensor. A CMOS sensor is the part of a camera that converts and processes light signals. As its name suggests, the sensor reacts to incoming light, which is analog in form, and converts it into electrical signals that can be digitized. In a phone’s CMOS sensor, each pixel contains a photodetector, a special kind of diode that converts light into current, and is paired with transistors that amplify the signal. The upside of this design is that power consumption is much lower than with CCD technology, the other mainstream image sensor solution. Of course, the transistors are not without drawbacks: together with the wiring they occupy part of each pixel, so some photons are lost before the conversion takes place. Such problems are mitigated with other measures, for instance back-illuminated CMOS sensor designs.
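
To make that signal chain concrete, here is a minimal, purely illustrative sketch in Python of one pixel’s read-out path, from photons to photocurrent to an amplified voltage to a digital value. The quantum efficiency, gain and 10-bit ADC figures are assumptions chosen for illustration, not the parameters of any real Sony sensor.

```python
# Toy model of a single CMOS pixel read-out: photons -> charge -> voltage -> digital code.
# Every constant here is an illustrative assumption, not a real sensor parameter.

def read_pixel(photon_count: int,
               quantum_efficiency: float = 0.6,    # fraction of photons converted to electrons
               gain_uv_per_electron: float = 50.0, # in-pixel amplification, microvolts per electron
               adc_bits: int = 10,
               full_scale_uv: float = 500_000.0) -> int:
    electrons = photon_count * quantum_efficiency        # photodiode: light becomes charge
    voltage_uv = electrons * gain_uv_per_electron        # transistor amplifies charge to a voltage
    code = round(voltage_uv / full_scale_uv * (2 ** adc_bits - 1))
    return min(code, 2 ** adc_bits - 1)                  # clip at ADC full scale

print(read_pixel(2_000))  # a brightly lit pixel
print(read_pixel(50))     # a dimly lit pixel
```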

So, what makes the 3-layer design special? Traditionally, there are only two layers: the pixel array and the layer holding the integrated circuit. Sony proposed adding a DRAM layer between the two, creating a sandwich-like design. DRAM is a kind of random access memory that allows data to be temporarily stored and read at high speed, thus improving efficiency. With the DRAM in between, data can be moved from the pixel layer to the IC, the region where processing takes place, at far higher speed.
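
The benefit of the buffer is easiest to see as a numbers game: the pixel layer can dump a short burst of frames far faster than the downstream processor can digest or play them back, and a fast in-between memory absorbs that burst, which is what pays off in the selfie and slow-motion scenarios discussed next. The sketch below is a conceptual illustration with made-up frame rates, not Sony’s actual design.

```python
# Conceptual illustration only: made-up frame rates, not Sony's specification.
# A fast intermediate buffer lets the pixel layer dump a short burst of frames
# much faster than they are played back, which is what enables smooth slow motion.
READOUT_FPS = 960       # assumed burst readout rate into the buffer layer
PLAYBACK_FPS = 30       # normal playback rate
BURST_SECONDS = 0.5     # length of the burst the buffer can absorb

frames_buffered = int(READOUT_FPS * BURST_SECONDS)
slowmo_seconds = frames_buffered / PLAYBACK_FPS

print(f"{frames_buffered} frames buffered in {BURST_SECONDS} s of real time")
print(f"played back at {PLAYBACK_FPS} fps: {slowmo_seconds:.0f} s of footage, "
      f"{READOUT_FPS // PLAYBACK_FPS}x slower than real time")
```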

So, from the perspective of an ordinary consumer, how does this improve the experience of using a camera? Well, in a world that deems selfies ever more important, it is common to take lots of photos and then pick the best one. With the faster image capture enabled by the DRAM layer, taking selfies becomes much smoother. Moreover, a higher capture speed means more frames are recorded per unit of time, allowing slow-motion footage of better quality. In conclusion, the groundbreaking 3-layer CMOS sensor is another good example of how disruptive technologies change our lives, proving that these state-of-the-art engineering ideas really do make daily life better, one little step at a time.

 

Snap: something apart from its valuation


One of the biggest events of the week is definitely the IPO of Snap, the parent company of Snapchat, the mobile application that went viral among teens. The corporation’s share price skyrocketed, with an astonishing 50% jump in market value within one trading day, and the hysteria extended into the second trading day, when Snap surged another 10%. The berserk hike in market value has raised concerns about the true underlying value of the company, with well-known financial columnists arguing that Snap isn’t worth that much. Anyhow, the aim of this article isn’t to judge whether Snap deserves such a high price-to-sales ratio, but rather to view the company from a different perspective than the commentary that focuses only on Snap Inc.’s performance in the stock market.

Before expressing my opinions, I would like to briefly mention how Snap Inc. positions itself. The change of the corporation’s name reveals the management’s thinking: from Snapchat to simply Snap, the founder sees the tech giant as a futuristic camera company. I agree that Snap should keep transforming itself into an authentic camera company, even though the only hardware product launched so far is Spectacles. Though the fancy sunglasses with an embedded tiny camera are priced at $130, Snap’s prospectus shows the project is actually running at a loss. In my view, as a software company specialized in building one of the best instant messaging apps, Snap does not yet have many experts capable of creating great hardware products, and the fabrication cost of new products is another hurdle. Even so, with user growth slowing, predominantly because Instagram and WhatsApp have been mimicking Snapchat’s features, entering the camera market is one of the clearer ways to expand the business.

Of course, there is another way to increase revenue. Snap Inc.’s operating costs are expected to rise in the near future, since Snapchat relies on cloud providers like Google and Amazon to handle the massive volume of data its users transfer every day. To cover that cost and move towards profitability, Snap could duplicate what Twitter has done. According to Twitter’s shareholder letter, it has a segment named “Data licensing and other activities”. Selling data about how people use Snapchat to other companies could provide Snap with a steady source of income. In fact, I believe the management has realized this opportunity. In November 2015, Snap announced an update to its privacy policy, stating that it has the right to use users’ data, including pictures you have uploaded or even deleted. Though the move outraged some users, it opens the door to a lucrative business for the company. In the long run, Snap should consider various ways to diversify its business. Perhaps the young corporation will once again shock most of us with innovative ideas, just as it did when it invented Snapchat.

EUVL – Key of Fabrication Node Warfare


From the viewpoint of consumers, we keep demanding thinner, lighter smartphones with better performance. Meanwhile, for fabless firms and semiconductor foundries in the post-Moore’s-Law era, it is extremely onerous even for seasoned players like TSMC or Intel to design or manufacture chips at smaller fabrication nodes to fulfil those wishes. As we keep advancing, traditional lithographic methods such as immersion lithography are no longer sufficient. The reason is simple: as feature sizes shrink, the wavelength of the light used for lithography must shrink with them in order to maintain resolution. Luckily, replacement technologies have been invented, EUVL among them.

Extreme Ultraviolet Lithography, or EUVL for short, is one member of the next-generation lithography family. The technology leverages EUV radiation with a wavelength of merely 13.5 nm. Such high-energy radiation occurs naturally but must be produced artificially for nanoscale manufacturing, which requires a dedicated source. For example, Cymer, a leading light source supplier, uses CO2-driven lasers to produce EUV. The laser pulses are first directed into a tunnel-like structure called the beam transport system (BTS). The BTS then reflects them onward to a set of focusing optics housed in a large chamber known as the source plasma vessel (SPV). Inside the SPV, a device called the droplet generator ejects high-purity molten tin droplets at an astonishing 70 metres per second and a rate of 50,000 droplets per second. The violent collision between the laser pulses and the tin atoms turns the atoms into positive tin ions, instantly releasing free electrons. Owing to the strong electrostatic attraction between the free electrons and the ions, the electrons accelerate as they are pulled back towards their parent ions, emitting much higher-energy photons in the process. This is how the EUV radiation used for EUVL is produced. The entire process sounds outrageously complex, yet it all happens in under a second.
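
For a sense of scale, the energy of a single 13.5 nm photon follows directly from E = hc/λ; the short sketch below works it out using nothing beyond standard physical constants.

```python
# Photon energy of 13.5 nm EUV light: E = h * c / wavelength.
H = 6.626e-34         # Planck constant, J s
C = 2.998e8           # speed of light, m/s
EV = 1.602e-19        # joules per electronvolt
WAVELENGTH = 13.5e-9  # EUV wavelength, m

energy_j = H * C / WAVELENGTH
print(f"{energy_j:.2e} J, about {energy_j / EV:.0f} eV per photon")
# Roughly 92 eV, versus about 6.4 eV for the 193 nm light used in immersion lithography.
```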

Since Cymer had mastered EUV light sources, ASML, the world’s largest lithography system supplier, acquired it in 2013. The acquisition was plainly synergistic, turning the two partners into one of the strongest alliances in the industry. Together they built the first EUVL platform intended for volume production, the TWINSCAN NXE. The introduction of the advanced system immediately attracted Intel, Samsung and TSMC, prompting the three dominant chipmakers to purchase and configure EUVL systems so as to stay ahead of the other two rivals. Because of the lead time required, the giants placed their orders in 2015 and 2016 even though the systems are intended for use in 2018, propelling ASML’s robust revenue growth. As long as the warfare of producing ever-smaller chips continues, the prospects of both ASML and EUVL remain bright. Let’s see how this cutting-edge technology opens a new page in the lithography industry.

The Story of Filters in the World of 5G


As more and more equipment operates on standards based on ultra-high frequencies (UHF), which according to the ITU definition span 300 MHz to 3 GHz, the radio spectrum has become increasingly congested. To alleviate the resulting slow connections, gurus are eager to develop new specifications for the next generation of wireless communication systems, incubating the idea of 5G.

According to the requirements set by the NGMN (Next Generation Mobile Networks) Alliance, 5G devices ought to attain extremely high data rates, low latency and improved coverage. By leveraging a wide range of bands, including those beyond 10 GHz, data is expected to be transferred far more rapidly. A useful analogy for the transition from 4G to 5G is a highway gaining more lanes: as the number of channels for exchanging data surges, with the amount of data held constant, we should see improvements in data rate.

However, the biggest problem encountered is the short wavelength of extremely high frequency (EHF) waves. According to the simple physics equation v = fλ, with the velocity of an electromagnetic wave fixed at 3 x 10^8 m s^(−1), the higher the frequency, the shorter the wavelength. As frequencies continue to rise, diffraction of the waves becomes insignificant, meaning that a single base station’s coverage shrinks dramatically. The implication is that many more base stations are required, which directly drives demand for advanced filters capable of delivering excellent performance.
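
As a quick worked example of v = fλ (assuming free-space propagation), the sketch below compares the wavelength of a typical sub-3 GHz carrier with that of a millimetre-wave band often discussed for 5G:

```python
# Wavelength from v = f * lambda, with v fixed at the speed of light in free space.
C = 3e8  # m/s

for label, freq_hz in [("2.4 GHz (typical sub-3 GHz band)", 2.4e9),
                       ("28 GHz (often cited mmWave 5G band)", 28e9)]:
    wavelength_cm = C / freq_hz * 100
    print(f"{label}: wavelength is about {wavelength_cm:.2f} cm")
# ~12.5 cm versus ~1.07 cm: the shorter wave diffracts far less around obstacles,
# so coverage per base station shrinks as frequency rises.
```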

Under such arduous circumstances, there is no doubt that only a handful of RF solution providers with state-of-the-art technologies can embrace the opportunities arising from the emergence of 5G. In my opinion, the key content-growth driver for these companies will be leading BAW filtering techniques.

Readers who have been closely tracking the development of filters may have heard of SAW (Surface Acoustic Wave) and BAW (Bulk Acoustic Wave) filters. Both are passive filters built around a piezoelectric substrate that converts electrical signals into acoustic waves and back, removing unwanted frequencies while keeping those that lie in the passband. In SAW filters, as the name suggests, interdigital transducers (IDTs) launch acoustic waves that travel along the surface of the substrate, whereas BAW filters confine the acoustic waves inside the bulk of the substrate.

Because of this difference in working principle, it comes as no surprise that the two filters serve different purposes. SAW filters are the better choice for frequencies below about 1.5 GHz, but as frequencies rise, for instance beyond 2.5 GHz, SAW filters drift much more than BAW filters, meaning their performance is strongly affected by temperature change. As we begin using EHF bands, BAW filters plainly outperform SAW filters. Technically, BAW filters are known for a higher quality factor, the ratio of centre frequency to bandwidth, which means lower energy loss during filtering. On top of that, the sharper filter skirts that come with a higher Q give a steeper rejection curve, letting BAW filters separate wanted signals from unwanted ones more effectively. Last but not least, their low insertion loss means BAW filters perform well even where signal conditions are poor.
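
To make the Q-factor point concrete, here is a tiny sketch computing the ratio of centre frequency to bandwidth for two hypothetical filters; the numbers are illustrative and do not describe any real SAW or BAW part.

```python
# Quality factor as the ratio of centre frequency to bandwidth.
# Both filters below are hypothetical and purely illustrative.
def q_factor(center_hz: float, bandwidth_hz: float) -> float:
    return center_hz / bandwidth_hz

print(f"Filter A (wide skirts):  Q = {q_factor(2.5e9, 100e6):.0f}")  # lower Q, more energy loss
print(f"Filter B (sharp skirts): Q = {q_factor(2.5e9, 25e6):.0f}")   # higher Q, steeper rejection
```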

Returning to the original point: by refining the design of BAW filters, companies holding this kind of technology will enjoy a golden age as demand for BAW components skyrockets. As for competition, unlike the smartphone industry with its numerous corporations, the BAW filter industry is predominantly occupied by Avago Technologies and TriQuint, now part of Qorvo. The outrageously high complexity of developing the related technologies makes it difficult for newcomers to unseat the incumbents unless they are heavily funded and willing to make substantial investments. Otherwise, the oligopoly is quite likely to persist.

 

Smart Home: What We Need and Who Will Win?


Ever since the smart home concept was introduced, beyond the many powerful IoT products, the most eye-catching devices have definitely been the smart home hubs created by the big names of the tech industry. Centralized control centres like Amazon Echo seem set to become the building blocks of the ideal futuristic home. Yet is this what we need? And, to push the question further, will these products dominate the market?

They are Great, just Not the Greatest
First, the voice-activated smart home devices mentioned above are undeniably fabulous, both in design and in performance. These lovely machines give quick and accurate responses, thanks to the companies’ heavy investment in voice recognition and natural language processing.

Beyond user experience, from an industry perspective, key smart home players like Amazon and Google have the ability to build their own alliances, ensuring that a large number of home appliances are compatible with their products. These so-called “static smart home devices” are therefore extremely competitive and will very likely take the lion’s share of the market, unless “dynamic smart home devices” enter the game and impress everyone.

Static versus Dynamic: Clarifying their Definitions
To prevent confusion, static does not simply mean an object that lacks mobility. A static device’s responses depend solely on data and algorithms. In other words, static machines are completely rational, which makes sense: like most silicon-based inventions, they rarely forge a close relationship with us.

However, we are living in an extraordinary era, and what once seemed impossible is becoming real. We believe the future belongs to dynamic smart home devices, because they are capable of understanding something treasured in human society: emotion. Dynamic devices won’t judge your instructions based only on your words, but also on your tone. When you shout angrily because your country just lost at the Olympic Games, a dynamic machine may soothe you, telling you that losing doesn’t mean failure. Above that, dynamic devices will actively interact with you, just as your friends would. In some sense, they become true living machines.

Credit: Héctor García on Flickr

Like your Friend, They are Different
In fact, dynamic devices have already appeared. Pepper, the robot created by Softbank, should be considered the first commercial product with emotion. The prototype released in 2014 was a great leap for the robotics industry, as Pepper is endowed with formidable computing power and a unique character. With the SDK provided by IBM’s Watson team, developers can enable the android to learn through a process known as cognitive learning. Beyond that, Softbank designed another part of the brain to account for emotional activities. Pepper is thus given a soul, transforming from a powerful computer into a lively, endearing friend that we love and care for.

How to be a Winner?
Though Pepper already exhibits some basic feelings, the generation of emotion isn’t well developed yet. For instance, although Pepper responds when you pat his head, he won’t actively initiate interaction, which makes him a semi-dynamic device. Furthermore, most applications don’t support Pepper, which limits the product’s growth.

The ultimate winner will be the company that develops the perfect dynamic device: an intelligent companion that understands how you feel, offers useful ideas for solving your problems and talks with you whenever you need a friend. With an aging population, falling birth rates and the spread of individualism, many of us lack a partner to share our views and relieve our pressure. Perhaps what we truly need is a friend, a real friend that is always on our side.

 

Will Artificial Intelligence Eventually Replace Humans? (Part B)


First of all, we have to admit that we can no longer beat machines at specific, well-defined tasks. This is inevitable, as they can upgrade themselves almost without limit, practising while we are all asleep. Yet does this mean we humans lose on every front?

Imagination of AI
We often hear gurus say that robots lack the ability to imagine. Indeed, even artificial intelligence is bound by its command set and programs. AI can only partially change its own working mechanism, for example by leveraging neural networks to improve its performance, something traditional programs cannot do. Yet AlphaGo still cannot turn itself into a program that does anything other than play Go. Something fundamental remains constant and cannot be edited by the AI itself; its potential seems limited by its designed purpose. An easy conclusion seems to follow: AI just can’t imagine like us. But is this statement true?

From a philosophical perspective, what we think we imagine is in fact just a mix of existing concepts. In other words, imagination isn’t creating something from zero. This contradicts our common intuition, since we often define “imagining” as creating things that are completely new. For instance, none of us has ever seen a unicorn, so the unicorn seems to be a product of human imagination. Yet if we think deeply, a unicorn is just a horse’s body, a horn and perhaps a pair of wings. We may never see a unicorn, but we have all seen the three components that make up our imagined animal. The same mindset applies to everything we have created. And as long as something is generated according to a regular pattern, there is a chance that AI can learn it through practice.

We can imagine a future machine with far greater processing power pairing up thousands of objects, reasoning about the effect of merging a set of things into a new product. The working principle would be to let the AI system build a comprehensive understanding of a basket of objects, including their physical properties, designed purposes and so on. The system would then automatically calculate the probability that combining several parts achieves a particular goal. Imaginative power could therefore be possessed by AI, or at least imitated by it.

This shocking result could be disastrous for us, as people who seemed irreplaceable by AI, such as creative workers, would now face unemployment. In fact, we are already experiencing the power of so-called creative programs. Google, a pioneer in AI, has demonstrated a program that can write mournful poetry on its own, proving that AI can indeed do things most of us thought it could not.

Don’t Be Too Worried Yet
What I have described will not be easy to achieve. Personally, I think such a powerful program, one able to create exactly the right product to tackle a given problem, needs more than ten years to be built. After all, AI researchers will have to design algorithms that let the program dissect countless inventions to understand how they work, and training the program will take quite a long time.

Second, we have to believe that a technological revolution never simply destroys the old world; it simultaneously creates things beyond most people’s imagination. For example, the emergence of steam-powered machines caused panic among workers. Yet, constrained by the context of their era, most people at the time did not realize the massive opportunities beneath the surface: the new machines required more skilled workers with specific techniques to operate them. Like those who lived in the past, we are limited by our current situation, which makes us extremely skeptical about AI systems. What we need to do is think about the treasure the coming AI revolution will bring, so as to embrace the benefits of this great invention. The key is to equip ourselves, not to blame the world. That is what keeps us “untouchable”, as Thomas Friedman puts it.

Finally, we have to answer the question posed at the beginning. I think AI will eventually replace every job we know today, from artists to politicians. However, as mentioned, new opportunities will be created in turn. So, to be more precise, AI will only partially replace humans, not the whole of us.

Will Artificial Intelligence Replace Humans? (Part A)


Since the epic battle between Google’s AlphaGo and the legendary 9-dan Go master Lee Sedol, deep learning has become a term popularized around the globe. The historic match also revived a question that has been debated for decades: “Will artificial intelligence eventually replace humans?”

AlphaGo is simply the recent example we are all familiar with, so we will use it to give you a basic understanding of neural networks in this article. Neural network gurus may prefer our upcoming article predicting the future potential of AI.

As we have all heard, AlphaGo is a program that leverages an artificial neural network to make decisions. A neural network mimics the structure of the human brain, which is built from a fundamental building block: neurons. In an artificial network, each connection between neurons carries a weight and each neuron has a bias. A neuron multiplies its inputs by the weights, adds the bias and produces an output, so the output varies with the input values. As the inter-connected neurons are arranged into layers, the outputs of the first layer become the inputs of the second layer, and so on. After the signal has passed through all the layers, the final output forms the decision and result of the network.
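
As a concrete, deliberately tiny illustration of that weighted, layered flow, here is a minimal forward pass in Python with NumPy: two inputs, one hidden layer, one output neuron. The weights and biases are arbitrary numbers chosen for the example; a real network like AlphaGo’s has millions of learned parameters.

```python
import numpy as np

# Minimal forward pass: input -> hidden layer -> output.
# Each layer multiplies by its weights, adds its biases, then applies an activation.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.8])            # input values

W1 = np.array([[0.2, -0.4],         # weights of the two hidden neurons
               [0.7,  0.1]])
b1 = np.array([0.1, -0.2])          # biases of the hidden layer
hidden = sigmoid(W1 @ x + b1)       # hidden outputs become the next layer's inputs

W2 = np.array([[0.6, -0.3]])        # weights of the single output neuron
b2 = np.array([0.05])
output = sigmoid(W2 @ hidden + b2)  # the final output is the network's "decision"

print(hidden, output)
```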

As you may observe, a great deal therefore rests on the initial input. If we do not obtain accurate input, then no matter how meticulously the layers are designed, the output is useless. Meanwhile, the layers between the first and the last, known as hidden layers, play an essential role too: if the weights are not precise, the result will be disastrous, as we can all predict.


For the first problem, the accuracy of the data fed to the initial layer depends greatly on the method and technology used to obtain the raw data. For AlphaGo this step is relatively simple, as the stones sit on a fixed grid of limited positions. Other neural network applications, such as analysing the stock market by reading huge volumes of analyst reports and financial news, can be far more troublesome. Because such programs must apply natural language processing to understand the metaphors used by authors and extract useful information, the workload increases enormously.

The solution to the first problem is to use advanced camera lenses and sensors to perceive the surroundings. Google’s driverless cars make use of LIDAR, a rotating sensor that transmits and receives pulsed laser light to build a clear picture of road conditions. This clearly makes Google’s cars safer, as they can detect the environment and react rapidly to potential dangers.

The second problem, inaccurate weights leading to inaccurate outputs, can be tackled with a training method called backpropagation. The error at the output is propagated backwards through the network, a pair of layers at a time, so that programmers can identify the malfunctioning neurons and finely adjust their weights. When the adjusted layers produce outputs that are exactly the same as, or very close to, the desired outputs, those layers are considered well trained. In theory, all layers can attain the highest degree of accuracy; it is just a matter of money. Since training involves an outrageous number of calculations every second, GPUs are often adopted to speed up the work, as they process data in parallel instead of waiting for each prior result.
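
Below is a minimal sketch of such a training loop: a tiny two-layer network learning the XOR function with backpropagation and gradient descent. It is only meant to show the mechanics of pushing errors backwards and nudging the weights; nothing here reflects AlphaGo’s actual architecture.

```python
import numpy as np

# A tiny two-layer network trained on XOR with backpropagation and gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer with 8 neurons
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0                                        # learning rate

for _ in range(10_000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: push the output error back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # adjust weights and biases against the error gradient
    W2 -= lr * (h.T @ d_out);  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))   # after training, the outputs should be close to [0, 1, 1, 0]
```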

Will AI replace humans in the long term? We will explore that in our next article.

From Pokemon Go to the Future of AR Gaming


Apart from Line’s IPO (NYSE: LN), the biggest news surrounding Japanese tech giants is definitely the launch of the mobile game Pokemon Go and the spectacular craze created by the application.

However, in contrast with articles that dig deep into the game itself, discussing the location-based services Pokemon Go provides and the potential benefits they bring, today we are taking a higher vantage point: scrutinizing how the game can affect the entire AR gaming ecosystem and how it encourages the development of holograms.

For those who, like me, haven’t experienced the game, Pokemon Go is basically about seeking Pokemon on the street. Through the camera lens, players can see the monster to be captured appearing in the actual world. I see this as a great application of AR technology, yet it is only a very early stage of AR gaming.

As mentioned above, Pokemon trainers still need their phones to see the monsters, so I classify this as AR Gaming first generation (ARG 1G). By definition, the term covers every AR game that requires gamers to carry a gadget for the complete gaming experience. Moreover, the characters’ behaviour is still not realistic enough. Take Pokemon Go as an example: though Pikachu looks almost identical to its appearance in the Pokemon TV series, you can still easily distinguish it from real-world objects. Consumer technology at this stage limits the fascinating performance that AR systems ought to deliver, preventing players from fully imagining themselves as Pokemon trainers.

So how do we make AR gaming more realistic? That falls within the realm of holographic technology. Holograms are stereoscopic virtual images that appear in the real world, giving people the illusion that the images are part of the actual environment. Through the application of holograms, the experience will be massively improved. Holograms, then, will be the keyword for ARG 2G.

In fact, I believe ARG 2G has two stages, which for simplicity I will call 2G A and 2G B. In 2G A, gamers will still need a gadget: they must carry a machine that displays the holograms throughout the gaming process, whether a phone or a head-mounted device like HoloLens. In short, 2G A only enhances the gaming experience, with almost no change in convenience for gamers.

So how do we transition to 2G B? The method is simple enough, though the initial set-up cost might be quite high. Since 2G B is defined as “creating holograms without players needing to carry a device”, companies providing AR gaming services will have to install hologram displays at various venues. Furthermore, they will have to ensure that the holographic characters react to the players’ chosen actions, which implies a heavy workload on the processing centre responsible for the centralized management of hologram creation. Nonetheless, I believe this is just a matter of cost. As long as service providers are willing to pay the bills, they will eventually reap the benefits, given that three-dimensional holograms are far more realistic than the flat images on phones and tablets.

As for the ultimate stage of ARG that I can imagine at this moment, I believe the final evolution will be the perception that AR objects are no different from real-world objects. I mentioned that at the hologram stage people cannot separate holograms from the actual world, but that indistinguishability relies on vision alone. The moment players reach out and try to feel the holograms with their own bodies, they immediately sense that they are still inside an AR gaming zone, not a truly real-world situation.

Back to the example of Pokemon Go: it means that if you kick Pikachu, your foot can feel the weight of the yellow pet. Depending on the force you used, the direction of the kick and other statistics describing your movement, Pikachu will end up crying for help three metres away from you. This simple action captures the definition of ARG 3G. Once we reach this point, AR and the actual world become inseparable: you experience the same feeling interacting with AR characters as with real-world people. At this paramount level of AR evolution, it is hard to tell the difference between a real-world robot and an AR robot. After all, it will no longer be necessary to use the term “AR”.

An immersive, realistic experience has been the perpetual goal that VR and AR solution providers chase. Whether Oculus or Microsoft, at the end of the day they will discover that they are doing the same thing: they differ only at the starting point, and they will meet each other at the destination.

Challenger of Soft Drinks Firms – The Right Cup


Game Changer or Swindler?
Recently, a start-up company launched a fund-raising campaign on Indiegogo that immediately caught everyone’s attention. Its product, named “The Right Cup”, gives users the feeling of drinking cider when they are actually drinking plain water. Is this just another scam that steals money from backers’ pockets, or a revolutionary product with the potential to compete with Coca-Cola or Pepsi and eventually seize a sizeable chunk of market share from these celebrated oligopolists?



Just Black Magic?
It might sound unbelievable, but rather than saying that we taste food, it would be more accurate to say that we smell it. Why? Research has shown that more than 80% of our judgment of taste is based on the scent given off by the food itself, not on the tiny taste buds in direct contact with the delicacies. Biologists identify two major ways in which we perceive the flavour of food. The first, orthonasal olfaction, is smelling through the nose, typified by walking into a restaurant and being immersed in the pleasant scent of food. The second, retronasal olfaction, is perceiving smell through the mouth: put simply, while you are munching a sandwich, you are tasting the lettuce through retronasal olfaction.



So, How is it related to The Right Cup?
Knowing the basic principles of how our brain obtains smell information, it is time to link them to the recently released product. To simulate the experience of drinking freshly made, tasty juice, the start-up adds food additives in various flavours to the cup. For instance, it designed a set of cups in four fruit flavours, such as apple, lemon-lime and orange. Besides, painting each cup in the colour of the original fruit, apple-flavoured cups in red, for example, helps our brain spontaneously recall the times we actually enjoyed cider. So when you fill the cup with water, drop in some ice cubes and take a sip, the scent from the additives and the gaily-coloured cup immediately fools your brain into feeling that you are drinking cider. This is the working principle of this fascinating cup.

Bright Future Ahead?
Now that we know exactly how this lovely cup works, it clearly has incredible potential to alter the ecosystem of the current soft drinks market. The product’s most significant edge is its focus on health. According to demographic studies by several firms, the popularity of soft drinks is closely correlated with the number of patients diagnosed with diabetes. Under the inevitable trend towards healthier lifestyles, soft drinks, together with junk food, will no longer be as widely accepted as in the past. The Right Cup can therefore win customers over from the soft drink hegemonies by bringing its product to market. In my personal view, the only remaining problem is price, yet I believe that once mass production commences, the average cost will soon fall to an affordable level.


Moreover, the fact that The Right Cup is reusable helps keep it competitive: no matter how far the price of soft drinks plummets in the future, The Right Cup has the upper hand. The company says the scent from the additives can last for up to six months, an astonishingly long period for most customers, meaning users can enjoy the cider-drinking experience as often as they like within that time.


To conclude, this gorgeous product will very likely challenge the soft drink giants in the future. I hope it soon appears on supermarket shelves for our enjoyment.

TechBrink
05/07/2016

 

Image Sources:  Indiegogo

Probe into the Space: Explore the Massive Return of Asteroid Mining


Ever since the beginning of human history, we have imagined what space looks like. During the Cold War, rockets carried astronauts to the Moon, a great leap for astronautics; yet space missions remain costly and have not generated returns commensurate with the billions of dollars invested. Perhaps asteroid mining is the way to produce massive returns.

First Step to an Ambitious Goal
Planetary Resources, a start-up established in 2010, has been working on technologies for prospecting mineral-rich asteroids. The Redmond-based company launched a satellite, known as the Arkyd-3 Reflight, in 2015. The satellite aims to validate the avionics and control systems developed by the company, paving the way for future missions.

Current Business Plan
Though the company’s ultimate goal is to establish fully automated mining systems on asteroids, which could potentially upend the commodity market, its current business plan is to rent out or sell telescopes to customers. The first prototype, the Arkyd-100, is proclaimed by the company to be the first publicly accessible space telescope. The telescope is said to be cheaper to build thanks to its modular approach: the decentralized hardware allows faults to be contained within one region while normal operation continues elsewhere, so engineers only need to fix a small subset of components, greatly reducing repair costs. Such telescopes will be used for Earth observation and astronomical purposes.

Once the company recoups its massive R&D expenses, which cover designing, fabricating and launching the telescopes and satellites, it will implement the second stage of the plan: sending survey probes into space to perform deep scanning missions on near-Earth asteroids, looking for targets worth mining. Planetary Resources itself predicts that the search for a target may take a decade, depending on how quickly its technologies mature. As we all know, the space environment is harsh for astronauts and machines alike: pervasive high-frequency radiation in outer space can severely impair the functionality of meticulously designed components. To contend with this interference, the company has been improving its avionics technology.

To the Bright Future of Mankind
Planetary Resources envisions two long-term goals. The first is to deploy and popularize its fully automated mining system. After mining robots extract minerals from asteroids, the ore is sent back to stations on or around Earth for several purposes. Water will also be produced in outer space, for instance by combining hydrogen and oxygen, and stockpiled in “orbital propellant depots”, a term coined by the company, to supply raw material for rocket fuel production. The entire process, from prospecting to processing of ore, will be directed entirely by artificial intelligence, without human intervention.

The second long-term goal is to control the orbits of asteroids. The corporation foresees that near-Earth asteroids may pose a danger to our planet, making such technology essential. By actively manipulating an asteroid’s trajectory, casualties and economic losses could be greatly reduced.

Back in the present, we have to admit that the technology is still at an early stage and asteroid prospecting remains costly. But as long as we persist in developing these futuristic technologies, the effort will eventually prove prescient. For the sake of a brighter future, governments and the rest of us should continue to bolster this industry. Perhaps, decades from now, asteroid mining will finally become a reality.

 

By Techbrink
15/02/2016

Image Source: Raw Science TV

Education in Future – How Technology is Changing this Field


Conventionally, the major subjects in schools are languages and mathematics. However, I am certain that future generations will no longer need to take second-language lessons, while new subjects will emerge to replace the so-called “old-fashioned” ones. Why? In broad terms, because of technological advancement.

First, second-language lessons will eventually vanish from the list of core subjects. Companies are already working on instant translation services; for instance, the joint research teams at Microsoft and Skype have developed real-time translators. These cloud-based services break down language barriers between individuals, so that everyone can communicate freely in their mother tongue instead of learning another language from scratch. Because people no longer need to spend years learning additional languages just to communicate, considerable time is freed for other uses. Moreover, the technology eases the comprehension barrier between people, as translators automatically turn intricate jargon into everyday vocabulary; all of us will soon be able to grasp, within seconds, the complicated concepts that experts try to explain. Such cutting-edge interpretation technology will become a great success, promoting the spread of knowledge across the globe, bolstering the exchange of ideas and driving rapid evolution in every industry. Imagine the tremendous synergy of scientists and pundits around the world collaborating, and you will realize that instant translation has real potential to change the world. And if we do achieve this goal, it clearly becomes unnecessary for future generations to learn second languages; instead, language becomes an elective for those who genuinely love it and are willing to learn more of their own accord.

Secondly, computer science will become a core subject at school. Its importance will become apparent when people finally realize they can’t live without robots. As artificial intelligence (AI) matures (for example, androids that can do most of our housework and never need to rest or sleep), it is inevitable that robots will surpass us and eliminate many jobs. So how do we make a living? One way is to acquire programming skills. As we all know, part of the reason we learn core subjects is that they enhance our competitiveness; take English, a worldwide language of communication, where fluency benefits our future careers. Likewise, if robots are going to be omnipresent, we will soon see computer science on the core subjects list. This will become a global trend in education, and as soon as governments or schools realize that working robots will be everywhere, they will adjust their curricula.

When we talk about the problems of educational systems around the world, one of the most criticized is that students find learning a waste of time. In fact, most of us never apply our physics or chemistry knowledge in daily life, except those who aim to become teachers or engineers. For years, schools have neglected the importance of financial education, mainly due to a lack of teachers; inadequate government endorsement is another key factor. As Virtual Reality (VR) devices become more affordable, students will be able to learn the workings of financial markets far more efficiently. The immersive experience of letting students trade stocks, buy bonds and borrow money makes the simulation realistic, helping them understand finance far better.

 

By Techbrink

Food Future: Molecular Gastronomy, In-Vitro Meat and Food Pills


In science fiction movies, we have seen many futuristic ideas about food. Are we ready for these crazy inventions? Are we going to replace conventional food with these alternative food choices?

Molecular Gastronomy:
Biozoon, a Germany-based company, has created a series of synthetic foods. Within its broad product line, the flagship product TexturePro lets customers produce their meals within minutes. TexturePro comes as food powders in various flavours, from fresh salads to delicious broths. Simply add the powder to a pot of boiling water and you get a bowl of flavoured solution; leave it to cool, and when you scoop it up with a spoon you get a jelly-like object. That is how you create a meal with the product. The upside of eating this way is that it is low in both fat and calories. However, the product sacrifices one of the most important aspects of food, texture, which seems somewhat ironic given the name TexturePro. With curry and steak all tasting like gel, it will be hard for the company to convince consumers to buy its products and keep eating these jelly foods. The product obviously suits elderly people best, as it lets them enjoy the taste of food their dentures aren’t strong enough to chew.

In-Vitro Meat
Conventionally, we slaughter animals like cows and pigs for their meat. Yet some vegetarians denounce killing and hunting animals for their flesh as inhumane, and the meat market has lost many customers as a result. From now on, however, vegetarians may be persuaded to put meat back on their plates, thanks to in-vitro meat technology, which allows meat to be obtained in a far more humane way. Scientists at Maastricht University have made significant breakthroughs in fabricating meat artificially: after painlessly extracting muscle cells from livestock, the research team cultures the cells in a petri dish, and as the cells gradually grow, a piece of raw meat is produced. The new method reduces the carbon dioxide emissions caused by rearing animals while tackling the inhumanity of slaughtering them. Unlike molecular gastronomy, in-vitro meat offers the original taste of meat; public concerns are mainly about the incubation time and the high cost of mass production.

Food Pills
The idea of food pills has appeared in many sci-fi masterpieces, yet it has never been realized in real life. One of the problems is the tremendous quantity of pills we would need to take each day. Converting the roughly 2,000 kilocalories we require daily into pill form would mean swallowing around 0.22 kg of pills, not to mention the extra tablets needed for proteins, minerals and vitamins. Rather than taking so many capsules, people would simply prefer conventional food, like a pizza or a bowl of udon noodles. There is no incentive to sustain life in such a cumbersome way.
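
The 0.22 kg figure is roughly what you get if you assume the pills are made of pure fat, the most energy-dense macronutrient at about 9 kcal per gram; here is a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the ~0.22 kg figure, assuming the pills are pure fat
# at roughly 9 kcal per gram (the most energy-dense macronutrient).
DAILY_KCAL = 2000
FAT_KCAL_PER_GRAM = 9

grams = DAILY_KCAL / FAT_KCAL_PER_GRAM
print(f"{grams:.0f} g, about {grams / 1000:.2f} kg of pills per day")
# Carbohydrate or protein, at roughly 4 kcal per gram, would more than double that mass.
```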

In conclusion, though we foresee a future of pervasive artificially synthesized food, for now we will keep having regular meals with natural vegetables and grains. Because of technological and research funding constraints, food pills remain out of reach, and the masses will need more time to adopt new products, as resistance such as suspicion about food safety may persist. Even so, I believe the food of the future will take less time to cook and be richer in nutrients; that is the inevitable direction of its development.

 

By Techbrink
18/01/2016