Archive for the ‘Technology’ Category

Coronavirus Tests: what you need to know

Friday, August 21st, 2020

This blog post focuses on the tests that have been offered during the coronavirus pandemic to evaluate, both individually and across geographic locations, how the virus is spreading. I am writing this post because most people do not really understand how these tests work, their differences in specificity and efficacy, or what it means to receive a positive and/or negative result depending on the technology used. I would say that 99% of people are “blind” when it comes to information on the tests and how to proceed after receiving a result. Well, since I am a researcher and have been reading a lot about the types of tests, I will try to describe the ones on the market in a way that is easy to understand, with no technical jargon. The main problem, like everything related to this coronavirus pandemic, is misinformation or the lack of information; this is expected, since people in general know little about virology and the technologies used in research and clinical laboratories. In addition, most governments are not helping at all, issuing confusing guidelines. There are two main ways to detect viral infections: immunoassays and the PCR technique. Immunoassays are the more widespread ones, since they are cheaper and need just a droplet of blood or body fluid. PCR, which stands for “Polymerase Chain Reaction”, is a classic technique that has been used widely in molecular biology since the 1990s. It was invented in 1983, and I have been using it since my undergraduate studies; this is not new at all. However, the way it is used and the type of PCR performed can impact the results. I will start with the immunoassay tests and then discuss PCR. Lastly, I will give my take-home message, with some guidelines and remarks on how people should proceed.


Immunoassays have been used in laboratories for a while. I will not discuss the different commercial tests, but how they work. These assays detect an immune reaction caused by a pathogen, such as a virus, since our immune system generates what we call antibodies against the pathogen and, in some cases, a cellular “memory” against it. One of the problems with this coronavirus is that we still do not know how long the immune cells will “remember” the virus and generate a better response to a second infection (the basis for developing an efficient vaccine). This is still open to discussion. There are several ways to do immunoassays, but the most common approach is to detect indirectly the presence of these antibodies. Most antibodies are created against proteins on the outside of the virus. Thus, this type of test is indirect and detects a response against the virus. Here is where the problem arises. Most tests are binary: yes or no for the presence of antibodies against that specific virus (in this case, the coronavirus). Just to make sure people understand: these tests are not measuring the virus, but a body response to it, which will vary from person to person. The main problems that can happen are what we call false positives and false negatives. False positives happen when you get a positive test even without a coronavirus infection. The main explanation for this is a cross-reaction with other viruses, very similar to the current one, that infected the person in the past. Another important point is that immunoassays will only detect the body’s reaction to the virus after the infection, once the body has had time to create antibodies after coming into contact with the virus. So, if you do these tests early in the timeline of the infection, you can get what we call a false negative. Finally, the pros of these tests are that they are fast, cheap and need just a few people and reagents. These are the ones that most people are doing now globally.
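To make the false-positive problem concrete, here is a toy calculation using Bayes’ rule. The sensitivity, specificity and infection-rate numbers below are made-up assumptions for illustration, not real figures for any coronavirus test:

```python
# Illustrative only: all three input values used below are assumed numbers.
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive result reflects a real infection (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# With 95% sensitivity, 95% specificity, but only 2% of people infected,
# most positive results are actually false positives:
ppv = positive_predictive_value(0.95, 0.95, 0.02)
print(f"{ppv:.2f}")  # -> 0.28
```

This is why a single positive immunoassay in a region with few infections should be interpreted with care, and ideally confirmed by another test.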

PCR and qPCR

The basic steps of the PCR test start with sticking a long swab far into a person’s nose (to collect body fluid) or drawing the person’s blood. To clean up the starting material, most test protocols then call for an extraction step that isolates just the genetic material, clearing away all the debris and inconsequential bits. This requires time and specific reagents, which can be in short supply. PCR, as I wrote before, is not a new technology. However, in this case PCR measures directly the presence of the viral particles (the nucleic acids, or genetic material, of the coronavirus) in the blood and/or body fluids. Thus, this test is mainly indicated during the infection period, since it measures directly whether viral particles are circulating in the body. If you do the PCR test before the infection or after it has cleared, it will most likely be negative. The “upgrade” of PCR, so to speak, is qPCR, or quantitative PCR, which can measure the number of viral particles per volume of blood or body fluid. For example, this is the test used to detect and monitor HIV infection in AIDS patients. It is the “state of the art” method to detect the virus, even in small amounts. In this case a false negative is very unlikely, since the test is designed for that specific virus. False positives could happen if the PCR test was badly designed and picks up the genetic material of other viral particles infecting the person. These tests have lots of pros, since they are very specific; false positives and false negatives are rare. The cons are mainly the time it takes to get a result (generally days to a week); it is laborious, involves more people and reagents, and is expensive to use for big populations. Moreover, most health insurance companies probably will not cover it, especially in the US.
Recently, new technologies based on PCR have been developed to decrease the costs and increase the specificity and efficacy, making it more accessible (see this article for more information).
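As a rough sketch of how quantitative PCR turns a measurement into a viral load: the instrument records the cycle number (Ct) at which the fluorescence signal crosses a threshold, and copy numbers are read off a calibration line. The slope and intercept below are hypothetical values, not from any real assay:

```python
# Hypothetical standard-curve parameters (a real lab derives these
# from serial dilutions of a sample with known concentration).
SLOPE = -3.32      # Ct change per 10-fold dilution (~100% PCR efficiency)
INTERCEPT = 37.0   # Ct assumed for a single copy per reaction

def copies_from_ct(ct):
    """Estimate target copy number from a cycle-threshold (Ct) value."""
    return 10 ** ((ct - INTERCEPT) / SLOPE)

# A lower Ct means the target was detected earlier, i.e. more virus:
for ct in (20.0, 30.0):
    print(ct, round(copies_from_ct(ct)))
```

The key point is the exponential relationship: each decrease of roughly 3.3 cycles in Ct corresponds to about ten times more viral genetic material.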

Take home message

The main message for the readers of this post is that no test is 100% efficient and reliable; however, as shown above, qPCR is the best test if you are looking for a reliable result. Be careful, though: the PCR tests should be used during the timeline of the viral infection, since they measure the viral particles directly. On the other hand, the immunoassays (there are several fast tests based on immunoassays) measure indirectly the response of the body to the infection. Thus, these tests would be indicated toward the end of the infection timeline, and after some time, to see how long we can detect antibodies against the virus. All this information is complex, but I believe the simple tips I am giving here are important for the general population, and even for governments, to make decisions. Last, but not least, regardless of the test you did or will do, please do it at least 3 times, especially the immunoassays. For PCR, if you got a positive test, it means you have a coronavirus infection at that moment. So protect yourself and others and stay alert to symptoms. The tips I am giving here could be helpful so you won’t be confused by the deluge of information (many times “fake news”) about tests and their results. Be aware and good luck – to all of us. We are in this together.
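The advice to repeat a test can also be made concrete with a toy probability calculation. Assuming (and this is a strong simplification) that each repeat is independent and has the same made-up error rate, the chance that all repeats are wrong shrinks very quickly:

```python
# Toy model: an assumed 10% per-test error rate and independent repeats.
def prob_all_wrong(error_rate, repeats):
    """Probability that every one of `repeats` independent tests is wrong."""
    return error_rate ** repeats

for k in (1, 2, 3):
    print(k, prob_all_wrong(0.10, k))
```

In reality, repeated tests on the same person are not fully independent (the same cross-reaction can fool the same immunoassay twice), so this is a best-case sketch, but it shows why repetition helps.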

Big Tech Digital Health Revolution?

Friday, December 6th, 2019

A series of breakthroughs in biomedical science and digital technologies is triggering a convergence between the tech industry and the life sciences that will quickly lead to more intimate and interactive relations among patients, their doctors and biopharmaceutical companies. The ability to share information and data has changed completely in the past 10 years with the emergence of search engines, social media, and now tools for Artificial Intelligence (AI), especially with companies such as Microsoft, Google, Facebook and now Amazon getting into healthcare. In life sciences, novel web technologies have been used to track diseases in more efficient ways. Today, millions of people around the world with access to the Internet engage in Internet-enabled self-diagnosis by using search engines and other web tools. However, self-diagnosis is dangerous if the sources are not trustworthy enough. Another concern regarding these new technologies applied to healthcare and life sciences is patient privacy, data transfer and data storage (something “cloud” services have now largely solved, with some specific concerns remaining). Therefore, the question now is ‘‘can these tools be applied to healthcare and biomedicine in order to improve scientific and medical research?’’ I’ve been publishing articles about this digital revolution since 2012 (for some articles I’ve written on this subject, check “Social Networks, Web-based Tools and Diseases: implications for Biomedical Research” and “Big Data in Biomedicine”). Well, this week the cover of the British magazine Newsweek and its headline put Jeff Bezos, CEO of Amazon, at the forefront of solving healthcare and life sciences problems now and in the future; not just in the United States but in the world (for more information check the Newsweek article here).
Well, I am a little skeptical and concerned, but excited at the same time, about what is coming from Amazon (even Apple, Google and other tech companies are developing their own solutions). The truth is that the web has already impacted our society, but over the next few years, the web and apps will entirely re-invent the way scientists and physicians interact, especially with patients. There is indeed a conundrum faced by the life sciences, because the pharmaceutical and biotechnology industries need information from people (mostly patients), patients need information from these companies, and the life sciences industry also needs to collaborate outside its walls, building connections with other industries (such as big tech) and institutions. Science and the biomedical field need to be more open, and scientists/physicians have to be able to share data to speed up the transition from basic to translational research. As regulatory clarity emerges, with better algorithms, AI and data encryption to ease privacy issues, more companies will adopt digital tools, and these technologies have the potential to revolutionize healthcare, clinical trials and research the same way they are impacting our society. Developing solutions that allow better ways to share information will add value to the life sciences and biomedical communities. In fact, science and medicine are undergoing one of the most exciting changes in their history. The full adoption of online platforms and telemedicine by physicians will transform medicine. Apps that can diagnose patients in seconds are already on the market. Alexa, the speech recognition device from Amazon, could be (and will be) able to diagnose diseases based on what people tell it daily. A whole new world in healthcare could be built, and the main players will be big tech companies.
Healthcare professionals will start to interact and communicate in ways that will have an enormous impact on disease diagnosis, scientific progress, drug discovery and the development of new treatments for complex, rare and unknown diseases. I bet Amazon has several “secrets” applied to healthcare that it will be unveiling in the coming years. Did anybody in the 1990s imagine that Amazon, then an online book-selling website, would become what it is today? Let’s see what they will disrupt next in the healthcare and life sciences fields.

Bridging Innovation between Academia and the Private Sector

Thursday, March 30th, 2017

Recent studies of innovation have pointed to the growing relevance of external sources of innovation. Rather than relying on internal Research & Development (R&D), private companies and organizations are reported to increasingly engage in ‘Open Innovation’, especially with the Academic Sector (for more information, check the article “Relationship-based university-industry links and open innovation: towards a research agenda”). This means innovation can be regarded as resulting from distributed inter-organizational networks, rather than single companies. In the same vein, various concepts of ‘interactive’ innovation have been put forward to understand the non-linear, iterative and multi-agent character of innovation processes. But what is the definition of innovation nowadays? According to different sources, innovation can be defined as the process of translating an idea or invention into a good or service that creates value or for which customers will pay. To be called an innovation, an idea must be replicable at an economical cost and must satisfy a specific need. Innovation involves the deliberate application of information, imagination and initiative to derive greater or different value from resources, and includes all processes by which new ideas are generated and converted into useful products. In business, innovation often results when a company uses ideas to further satisfy the needs and expectations of its customers. Well, we know that most new discoveries occur in the Academic Sector, and the Private Sector, even though it has R&D, lags behind in some aspects. This also varies by country, and bridging innovation licensing and technology transfer between both sectors is not trivial. Today we see several Start Up Incubators and Accelerators popping up all over the world, especially in the United States.
Each one of these has a different model and duration; however, most are not connected directly to Universities. In the last 5-10 years, this has been changing. We are starting to see established public companies with big valuations in the technology sector, and also pharmaceutical companies, opening spaces for Innovation, Incubation of Start Ups and even Start Up Accelerators to foster new ideas that could be used to create new solutions and/or products for society. These are nice initiatives. For example, Johnson & Johnson has JLABS, with the main vision of empowering and enabling science to reach the people who need it, faster. Will they work, bringing more products to the company pipeline? We will see in the near future. However, technology transfer deals between academia and the private sector are not easy. We have few professionals who are experts in this field, and even lawyers often do not understand it or are not interested. But some successful deals have happened and are happening. I have been the Director of a TTO in a University for a year, and I’ve seen all the problems related to doing these types of deals. TTOs help Universities commercialize the Intellectual Property and Research developed on their Campus. My take on these problems: mostly a lack of efficiency in the processes. Why? What are the main problems in tech transfer and licensing between Universities and the Private Sector? First, the academic professor and researcher from the University lacks business skills, since they were not trained for that. In this case, the University must have a Technology Transfer Office (TTO) or a similar Department that supports the Professor. Secondly, the University still has some aversion to transferring a product or solution to companies that will bring it to market. I believe this is a cultural problem.
When we say University, everybody has in mind a teaching Institution without the objective of generating revenue and profit (and this is not the reality; Universities need sources of revenue beyond just student tuition and private donations). Thirdly, when both sides, the University TTO and the representatives of the private sector, sit at a big “marble” table to discuss Terms and Conditions, the University is always the weaker party, even though everything started there with its own resources. That could be justifiable, since after the deal most of the risk until the new product goes to market will be borne by the company licensing it. What does the future hold for Licensing and Tech Transfer? Well, I believe that all Universities must have a TTO with well-trained people to get the best deals for the University, and the Private Sector needs to change its approach when dealing with an Institution that started the whole development of a product or service of its interest. It needs to be a “win-win” situation, otherwise it is a waste of time and money for both sides. I believe this is a good start, and things are changing in Tech Transfer and Licensing. However, both sides, especially the University, need to be more “professional” and business-oriented when closing these types of deals because, in the end, they will get their piece of the money pie if everything works well, in most cases with no long-term risk.

My Trip to China

Thursday, December 29th, 2016

Last November, the Chinese Government selected five Brazilian representatives (see Image and Legend below) for the ITTN Program, and I was one of them. I was selected to spend the whole month of November in China since I am the Director of Innovation and Research at UCB in Brasilia, the capital of Brazil. The selection criteria were the innovation stage and evolution of the Academic Center of the Institution you are linked to and its current leadership; my curriculum and merit as the Director of Innovation were also evaluated. In 22 days, we had the chance to visit and do specific training in various Chinese cities, such as Shanghai, Beijing, Fuzhou, Shijiazhuang and Nanjing. Groups of people representing more than 15 countries, such as Russia, South Africa, Belarus, Serbia, Thailand, India and others, were also selected for this Training. Well, I just needed to write a blog post dedicated to this trip, since I’ve changed my vision of China. ITTN stands for “International Technology Transfer Network” and is a non-profit organization based in China that is dedicated to promoting international technology transfer and innovation services for organizations from different countries around the world. ITTN has already developed long-term partnerships with more than 400 foreign technology transfer organizations from more than 40 countries and more than 500 national organizations in China, making it a major globalized resource collaboration network. After this trip I can say that China has changed, and a lot. Even though there are lots of problems, such as poverty, pollution and violence, I saw a country very focused on strategic planning for the future, especially in sectors such as technology and Innovation. We had the chance to visit Universities, Hospitals, several Technology Parks and Company Incubators, and I was blown away. The experience in all the cities we visited was extremely positive.
The country is in full growth in the productive and civil construction sectors. We had the opportunity to travel by bullet train between cities; the trains can reach up to 400 km/h, and they have built a huge railway network. We saw impressive infrastructure work, with technological parks connected to shopping malls, housing and leisure areas everywhere. In all the cities we visited, there is a constant concern with reducing air pollution, promoting green energy and cutting the emission of toxic gases, mainly carbon monoxide from cars, since the country has 1.3 billion people. China has been investing heavily in technologies to end air toxicity, which can reach extreme levels, as we saw up close in Beijing and Shijiazhuang, for example. Air pollution can be felt as a strong smell of toxic gases, and many people have to wear masks on days with extreme pollution. In cities like Shanghai and Nanjing, pollution is already being better controlled by measures that limit the number of cars per family and encourage the use of small electric motorcycles. One fact that impressed me was the cleanliness of Chinese cities and all public transportation. My previous impression was that Chinese cities were extremely dirty, especially the streets; this was demystified during my visit. In my view, the main highlight of this trip was China’s strategic planning to become the world’s first economy by 2020.
The Chinese are bringing qualified professionals and people from around the globe to demonstrate how they will achieve this goal through: 1) construction of Technology Parks in association with Universities in several key cities; 2) changing the country’s role from a supplier of raw materials, like Brazil, into an importer and developer of new technologies – mainly through technology transfer and licensing; and 3) strategically planning the entire scientific and technological sector together with the productive sector by building companies (mainly Spin-offs from the Academic Sector and Start Ups) in order to generate public-private partnerships. So the main takeaway from my visit to China is that the Chinese people, as a nation, are executing an impressive action plan focused on specific areas of Technology and Innovation that are strategic to transforming a country into a world leader. Thus, here is my advice: watch out, United States of America (USA), the Chinese are coming; fast and furious…

From right to left: Myself, Pascale from PUC-RGS, Cristina from FORTEC, Agnaldo from SEBRAE and Tais from PUC-RIO. Source: PUC-RIO’s Website.

Smart and Fast Data, not just Big Data…

Friday, April 22nd, 2016

I’ve just finished reading the book “Big Data: Using SMART Big Data, Analytics and Metrics To Make Better Decisions and Improve Performance”, written by Bernard Marr. This book is a must-read for Data Scientists and anyone interested in data analytics. It is now clear that, at the end of the day, it’s not about how much data you generate (in a scientific laboratory, in healthcare, in logistics, in the financial world, etc.), it’s about how well you use it. Though Big Data has recently been deemed an over-hyped term, it’s not going to go away any time soon. I believe the opposite: data science will be applied to all sectors of society. Information overload is a phenomenon and a challenge we face all the time now. In fact, large-scale data analytics, predictive modeling and data visualization are increasingly crucial for companies in both high-tech and mainstream fields to survive. Big data capabilities are a need, not a want, today and tomorrow. Big Data is a broad term that encompasses a variety of angles. There are complex challenges within Big Data that must be prioritized and addressed, such as “Fast Data” and “Smart Data”. “Smart Data” means information that actually makes sense. It is the difference between seeing a long list of numbers referring to weekly sales and identifying the peaks and troughs in sales volume over time. Algorithms turn meaningless numbers into actionable insights. Smart data is data from which signals and patterns have been extracted by intelligent algorithms; collecting large amounts of statistics and numbers brings little benefit if there is no layer of added intelligence and expertise. By “Fast Data” I mean as-it-happens information enabling real-time decision-making. An advertising firm, for example, needs to know how people are talking about their clients’ brands in real time in order to mitigate bad messages. A few minutes too late and viral messages might be uncontainable.
A retail company needs to know how its latest collection is selling as soon as it is released, to decide whether to continue or stop selling it. Public health workers need to understand disease outbreaks in the moment so they can take action to curb the spread; one example is Google Flu Trends detecting spikes in flu searches on Google, and Twitter data is used in a similar way to track outbreaks of infectious diseases. A bank needs to stay abreast of geo-political and socio-economic situations to make the best investment decisions using a global-macro strategy. A logistics company needs to know how a public disaster or road diversion is affecting transport infrastructure so that it can react accordingly. One of the biggest benefits of integrating smarter data into content experiences is that it levels the playing field with larger competitors who may have more resources to burn on advertising media. By using the totality of visitors as a whole, and deriving meaning from all of their content experiences, we can deliver more relevant and contextual experiences than our competitors. And today, we can deliver those solutions in much less expensive ways than the multi-million dollar solutions that may have historically been out of our reach. Big Data is just a problem; Smart Data is a solution that changes the game of marketing and how we deliver better solutions for customers from this point forward. According to John Bollen, the four keys to converting big data into smart data are: 1) organize and manage resources; 2) identify your customers and/or targets; 3) target this specific group of people and evaluate the outcome in real time; and 4) use data analytics to look forward and make forecasts. In addition, we have to remember that technology alone doesn’t solve the problem of changing big data into smart data: it’s more about process than technology.
While tools are getting better at aggregating and parsing data, it’s ultimately up to us as data scientists to connect past behavior to future wants, preferences and needs. The technological advancements seen all the way back to the Industrial Revolution have been about automating manual processes, not making us smarter. The questions without answers are “what processes are in place to handle the data?”, “what governance is in place?” (in other words, who will make decisions that extend from the data?), and finally, “how do we operationalize the data?” Technology won’t answer these questions. First, we must have the right people and processes in place; then we look at how technology fits in. Smart and fast data generate reliable answers; however, we still need the right decision makers in the end.
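The “peaks and troughs” idea from the smart-data discussion above can be sketched in a few lines of code. The sales figures here are invented, and a real pipeline would use a proper signal-processing library, but the principle is the same: an algorithm reduces a long list of numbers to the few points that matter:

```python
# Invented weekly sales figures for illustration.
weekly_sales = [120, 135, 180, 140, 125, 210, 160, 150]

def find_peaks(series):
    """Indices where a value is strictly greater than both neighbors."""
    return [i for i in range(1, len(series) - 1)
            if series[i] > series[i - 1] and series[i] > series[i + 1]]

print(find_peaks(weekly_sales))  # weeks with local sales peaks
```

This is the difference between staring at the raw list and being told directly which weeks deserve attention.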

Image Source: Digitalroute

The Acquisition Boom in Technology

Wednesday, March 26th, 2014

In this post, I will leave science behind and discuss the technology “craziness” that has been happening: billion-dollar acquisitions of companies that do not even have revenue or profit. The Internet revolution is now eccentric and transforms youngsters with an app idea into millionaires and billionaires overnight. In less than 2 months, for example, Facebook acquired Whatsapp, the drone maker Titan Aerospace and now the virtual reality company Oculus. A mix of software, hardware and new technologies is becoming the sexiest thing to acquire nowadays. Companies with tons of cash, such as Facebook, Apple, Microsoft, Google, Yahoo and others, can choose what to buy and how to buy it. We are living in a new era of the internet. Working on Wall Street in the financial district and getting millions in investment deals is not the big American Dream anymore. The dream is to have an idea, build a start up and sell it for millions or billions of dollars to the big ones, or even grow it to become the next Facebook or Google. One interesting case is Whatsapp, which was sold to Facebook for around 16 billion dollars. If you think about it, this is a lot of money, but strategically it was a bold move. Whatsapp has essentially no revenue and doesn’t even sell ads. It is simple, just a messaging app, but its number of users builds its value. Let’s do the math: Facebook has around 1.2 billion users, and of these, probably millions use Whatsapp or have an account on it. However, millions of Whatsapp users (probably around 200 million people) don’t have a Facebook account. Thus, Facebook indirectly bought the new users that Whatsapp can incorporate into its platform. I use Whatsapp to message family, friends and colleagues, and I see the trend in text messaging: it is simple, useful, and the bottom line is that it does the job. Another example is Google buying Nest for 3 billion dollars.
As its own early dive into wearables with Google Glass demonstrates, Google knows it can’t miss the next big leap in hardware, and Nest provides what it wants in terms of connected home devices such as thermostats. The latest big move was Facebook buying the virtual reality company Oculus for 2 billion dollars. This is another example of a software company buying a hardware company. Facebook believes Oculus has the potential to be the most social platform ever, enabling the sharing of not just moments with your friends online, but entire experiences and adventures. This might also be a competitive move by Facebook against Google’s Glass platform. Since the burst of the internet bubble in the early 2000s, we have been watching another wave of tech internet companies popping up and billion-dollar acquisitions occurring. It has been all about strategy and new technology, not profits or revenue growth. The companies in the big “club”, such as Facebook, Apple, Google, Yahoo and Microsoft, are positioning themselves in a competitive scenario. The only question is who is going to be successful in the long run with these acquisitions. There is a promising future in the tech sector. Are we close to the movie “Minority Report” with virtual reality? Facebook believes so. We will see.

A Digital Revolution in Healthcare?

Friday, January 24th, 2014

Recent advances in information technology have opened an exciting new frontier. A convergence is taking place between medicine and wireless technology, making it possible to change the way medicine and healthcare delivery are managed. In the coming years, this digital revolution in medicine could have far-reaching and dramatic implications, changing the face of healthcare around the world. Mobile sensors and advanced processors are already enhancing doctors’ understanding of their patients’ signs and symptoms, leading to more personalized and effective ways to maintain health, detect problems and treat illness. In our increasingly connected world, where more than 6 billion people have access to mobile devices, this individualized healthcare data has the potential to be collected and distributed nearly instantly, resulting in improved care for people of all ages, in every area of the world, who suffer from virtually any disease, from diabetes to Alzheimer’s to breast cancer. The two major drivers of the increased interest in digital healthcare are the emergence of online networks (mainly because the internet is getting faster) and big data accumulation, mostly coming from medical records and scientific research (for more information see my recent article “The impact of online networks and big data in Life Sciences” in the Social Networking Journal). In fact, there are already more than 10,000 medical and healthcare apps available for download from the Apple iTunes store, and this is the third-fastest growing category among iPhone and Android users, showing no sign of slowing, with the mobile health market estimated to grow ten times, from US$1.2 billion in 2011 to US$12 billion in 2018. The major barriers to speeding up the application of digitalized information in healthcare are data privacy and regulatory hurdles (see more in my article “Big Data in Biomedicine”, published in Drug Discovery Today).
However, I believe that technology always overcomes such barriers, especially when big amounts of cash are being thrown at a specific sector such as healthcare. For example, health is the second most searched subject on the web. The mobile industry drives this change, mainly because billions of people on earth now own a smartphone with internet and search capabilities. Ultimately, this revolution will be driven by “machine to machine” interactions. For this new digital era to be truly transformational, apps need to transmit information directly to healthcare providers. This will allow healthcare systems not only to react, but to predict health outcomes based on a patient’s personal information. mHealth, or mobile Health, will allow communication and monitoring to take place like never before. Your body will talk to the healthcare system without you even knowing it. This new revolution has started and won’t stop. If it saves more lives and helps people live longer and better, so be it! (Image Source: Huffingtonpost)

Can Google Help Us Extend Lifespan?

Monday, October 7th, 2013

If everyone on the Forbes company list and the world's millionaires and billionaires simultaneously (and tragically) got cancer or Parkinson's (or any other complex disease), the world would probably be well on its way to finding a cure for these illnesses, thanks to the enormous wealth that would be incentivized to back those efforts (check the article "WTF Is Calico, And Why Does Google Think Its Mysterious New Company Can Defy Aging?" for more information). Finding a cure for an intractable disease requires time, enormous amounts of human and financial capital, cooperation, a lot of basic and translational research, and at least a few public-private partnerships. It's costly, it's complex and it's messy. Few people think about how to tackle these diseases and find better treatments or even a cure; the truth is that until you get sick you don't really care. This is why Calico, Google's newest mad-science project, is potentially interesting, and it was all over the news a couple of weeks ago. Calico is a healthcare company backed by Google that will be able to use all the information crawled into Google's servers and databases to seek valuable clues for treating diseases and eventually beating death. Crunching data into actionable information is Google's future, as seen in the driverless cars, Google Glass and other "crazy" projects from the arm of Google Labs named Google [X] that Sergey Brin leads. Calico can use Google's reach to make health data available and actionable, organize it, and create a community around it, opening unprecedented opportunities for patient engagement with health data. Big Data analytics in healthcare is receiving a lot of attention right now, and Google couldn't be left behind. Some say this is the company's boldest bet yet (see more in the article "Google vs. Death" on the cover of TIME Magazine). Calico represents the company's largest healthcare initiative since Google Health sprinted its way into obscurity and was shut down. 
Larry Page says he wants to solve cancer, but as a researcher in oncology for a decade, I would say this problem is more complex than it appears on the surface. I really think what Google has done and is doing is amazing, and I believe it takes a company with this many resources to attack and find cures for diseases such as cancer. There are many components in Google's history indicating that they will be able to at least help: 1) they have access to tons of data points, especially published data from scientific journals; 2) Google may not solve death or help us beat it, but it will at least help us understand the aging process better, using our own data on the web with time as a variable; 3) at the very least it brings all kinds of attention from industry and people, especially those already sick (see more at "What Google's Calico Means For Healthcare"). There is still a significant gap between researchers and entrepreneurs. However, at the same time, there are now more opportunities than ever before for both sides to team up to make technological breakthroughs in healthcare and, in turn, make those breakthroughs more accessible to the general public. So, I believe that even if Calico can't give us immortality and cure complex diseases, at least it is bringing industry's focus to healthcare, one of the sectors that has seen few technological breakthroughs in years. Maybe we should focus our ideas and ventures on the healthcare sector like Google is starting to do. If they are successful, we all win. But if they are not, it is just another "crazy" project from Google [X] labs that failed. We will see. (Image Source: TIME Magazine)

Chicago – an Emerging Biotech Hub

Tuesday, September 3rd, 2013

Article by Charlotte Chen, Ph.D. Candidate at Northwestern University

CHICAGO – While Boston and Silicon Valley seem to be the most visible biotech hubs in the US, the nation's best-kept secret in biotech may just be in the Midwest. Indeed, in 2011, the nine-state Midwestern super cluster – Illinois, Ohio, Wisconsin, Minnesota, Michigan, Indiana, Missouri, Kansas, and Iowa – employed over 120,000 more employees than either California or the East Coast (Ernst & Young, 2013), with more than 377,900 employees in over 16,800 establishments. The key player in the Midwestern super cluster is undeniably Illinois. The state is home to more than 3,500 biotech companies that employ around 81,000 people, with a gross economic output of around $98.6 billion (Illinois Government News Network, 2013). Chicago sits at the center of all this, with some of the best medical research universities in the Midwest and a vibrant biotech industry. In the Chicago metropolitan area alone, around 45,000 employees work in about one thousand biotech companies. Some big-name biotechnology companies headquartered in the greater Chicago area include Abbott Laboratories, credited with developing the world's first HIV blood screening test in 1985 and now employing 90,000 people across 130 countries, and Baxter Healthcare, known for its hemophilia, renal disease, and immune disorder therapies, which brought in $10.4 billion in earnings in 2006. So why does Chicago get overshadowed by its counterpart biotech hubs on the East and West coasts? One factor could be a lack of entrepreneurial tradition in biotech. Silicon Valley and Boston have created an industry culture that embraces risk-taking, and entrepreneurs expect to fail a number of times before hitting a winning idea. While the Midwest has all the expertise to drive biotech innovation, this entrepreneurial spirit is only recently beginning to flourish. 
In 2012, $150 million in venture dollars went into biotech startups in Chicago (City of Chicago, 2013), versus $1 billion in Silicon Valley and $860 million in Boston (MIT Technology Review, 2013). Hoping to raise Chicago's profile in the biotech world, state and city policymakers recently established two new biotech incubators, set to open in Chicago within the next year. The first of these incubators is the Health, Technology, and Innovation (HTI) facility, created by Governor Pat Quinn in partnership with the University of Illinois at Chicago, with the goal of boosting Illinois' economy. HTI is located at Chicago Technology Park in the city's Near West Side, within the Illinois Medical District and in close proximity to the University of Chicago's Pritzker School of Medicine, and is set to open in the summer of 2013. The State of Illinois and UIC split the bill equally, each contributing $1.7 million for a total of $3.4 million in initial funding. HTI will provide wet and dry lab space and office space to tenants, as well as access to a network of scientists, investors, and other leaders in the biotech industry. The incubator is intended to provide a low-risk environment for academic researchers to bring their ideas out of university labs for the first time, and to see whether they can become successful ventures. The City of Chicago is also throwing its support behind the biosciences industry. Mayor Rahm Emanuel recently announced plans to construct a biotech incubator in downtown Chicago, set to open in 2014. With this additional facility, the city hopes to increase Chicago's national profile as a biotechnology innovation hub. The new incubator will provide a space for researchers from Chicago's top universities to collaborate with investors and partners in industry to develop discoveries from academic research with commercial potential. 
It will also provide a downtown presence for major biotechnology companies in the area, most of which are located in the northern Chicago suburbs. This new incubator will be well supported through access to professionals with business expertise, such as Chicago Innovation Mentors and ChicagoNEXT. Chicago Innovation Mentors is an organization that pairs university faculty with industry mentors to accelerate the formation of new biotechnology ventures; its members include the University of Chicago, Northwestern University, the University of Illinois at Chicago, and the iBio Institute/PROPEL. ChicagoNEXT is an organization created by World Business Chicago to accelerate innovation in clean tech, web and mobile technology, and biosciences. The new biotech incubator is roughly modeled on 1871, an incubator for IT and digital startups in downtown Chicago. Chicago is also breeding homegrown entrepreneurial talent. The University of Chicago's Polsky Center for Entrepreneurship offers entrepreneurship courses, a student entrepreneurship club, business plan competitions, and entrepreneurship mentoring to its students, as does Northwestern's Farley Center for Entrepreneurship and Innovation. The University of Illinois at Chicago offers a Bachelor's in Entrepreneurship, and its Liautaud Graduate School of Business offers an entrepreneurship program. While established biotech companies are by no means a new presence in Chicago, a nascent culture of innovation is emerging in the style of Silicon Valley and Boston. Perhaps with all these efforts to spark biotech innovation in Chicago, the Midwest's "best kept secret" in biotech will soon be unveiled. (Chicago's Skyline Image Source: Wikimedia Commons)

See original article published at the Roundtable Review

A new way to store digital information?

Sunday, August 26th, 2012

Information of all kinds has been accumulating since humans learned to write on stone in ancient times. Now that there is an explosion of information, mainly digital information, we need better ways to store it in a safe and stable manner. One solution that is becoming reality is the "cloud", a service that companies provide for information storage. Companies such as Apple, Dropbox, Google and others offer server space to store all types of digital information (books, movies, photos, etc.) for a fair price. However, it looks like nature already had a simple and stable way to store data. A recent article in Science by Harvard Professor George Church (for more information see "Next-Generation Digital Information Storage in DNA") demonstrates that indeed nature has a better way to do information storage using the DNA molecule. DNA has many potential advantages over other systems since it is very stable and most of the time immutable (except when exposed to mutagens that cause mutations in the bases, changing its sequence). Dr. Church's group developed a strategy that encoded the digital information of a whole printed book using a novel scheme based on next-generation DNA synthesis and sequencing technologies. The group converted 53,426 words, 11 images and 1 JavaScript program into a 5.27-megabit bitstream, which was encoded onto 54,898 oligonucleotides of 159 bases each (oligonucleotides are small pieces of DNA). After this codification, the oligonucleotides were synthesized and printed, forming a stable DNA library. The newly synthesized DNA was then sequenced to recover the information, which was done successfully. This article is a milestone not just in storing information inside a biological molecule, but in showing that nature itself has smart ways to "save" and codify information. After all, DNA encodes entire cellular programs inside the cells of organisms. 
Importantly, we are still trying to decipher DNA's cellular code, since it is composed of different layers of information, not just the sequential bases of the "linear" DNA. Examples include chemical modifications such as DNA methylation or histone acetylation, which can change the meaning of specific regions. The fact is that DNA is very suitable for immutable, high-latency, sequential-access applications such as archival storage – cells already use it that way. So, why did we not think about using DNA to store information before? Mainly because now we have all the technology to do it, and before we did not. We can synthesize long stretches of DNA, print it on glass slides and also sequence it quickly, since DNA sequencing technologies are evolving faster than the speed of computers. In fact, Moore's law does not apply to DNA sequencing. The costs of DNA synthesis and sequencing have been dropping at exponential rates of 5- and 12-fold per year, respectively. This is much faster than electronic media, which improve at just 1.6-fold a year. Additionally, DNA has other advantages for storing information, such as density, stability and energy efficiency. I believe this might be just the start for digital information stored in the DNA molecule. Information will continue to accumulate; long-term and stable solutions like the one presented by this group will be able to store it. The take-home message is that nature is teaching us – the tools are already here, we just have to learn how to use them.
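To see how dramatic the compounding of those annual factors is, here is a small back-of-the-envelope calculation using the rates cited above (sequencing ~12-fold, synthesis ~5-fold, electronic media ~1.6-fold per year); the helper function and the five-year horizon are my own illustration:

```python
# Compound the annual cost-reduction factors cited in the text.
def cost_after(years: int, annual_factor: float, start: float = 1.0) -> float:
    """Cost after `years` if it drops by `annual_factor` each year."""
    return start / annual_factor ** years

for name, rate in [("DNA sequencing", 12.0),
                   ("DNA synthesis", 5.0),
                   ("electronic media", 1.6)]:
    print(f"{name}: $1.00 today -> ${cost_after(5, rate):.5f} after 5 years")
```

After five years at those rates, a dollar of sequencing cost shrinks by a factor of 12^5 (about 250,000), while the same dollar of electronic storage cost shrinks only by about 1.6^5 (roughly 10) – which is why the gap widens so quickly.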