Archive for the ‘Technology’ Category

Bridging Innovation between Academia and the Private Sector

Thursday, March 30th, 2017

Recent studies of innovation have pointed to the growing relevance of external sources of innovation. Rather than relying on internal Research & Development (R&D), private companies and organizations are reported to increasingly engage in ‘Open Innovation’, especially with the academic sector (see the article “Relationship-based university-industry links and open innovation: towards a research agenda”). This means innovation can be regarded as the result of distributed inter-organizational networks rather than of single companies. In the same vein, various concepts of ‘interactive’ innovation have been put forward to capture the non-linear, iterative and multi-agent character of innovation processes. But what is the definition of innovation nowadays? According to different sources, innovation can be defined as the process of translating an idea or invention into a good or service that creates value or for which customers will pay. To be called an innovation, an idea must be replicable at an economical cost and must satisfy a specific need. Innovation involves the deliberate application of information, imagination and initiative to derive greater or different value from resources, and includes all processes by which new ideas are generated and converted into useful products. In business, innovation often results when a company uses ideas to better satisfy the needs and expectations of its customers. We know that most new discoveries occur in the academic sector, while the private sector, even with its own R&D, lags behind in some aspects. This varies from country to country, and bridging innovation licensing and technology transfer between the two sectors is not trivial. Today we see start-up incubators and accelerators popping up all over the world, especially in the United States.
Each one of these has a different model and duration; however, most are not connected directly to universities. In the last 5-10 years, this has been changing. We are starting to see established public companies with big valuations in the technology sector, and also pharmaceutical companies, opening spaces for innovation, incubation of start-ups and even start-up accelerators to foster new ideas that could be turned into new solutions and/or products for society. These are nice initiatives. For example, Johnson & Johnson has JLABS, whose main vision is to empower and enable science to reach the people who need it faster. Will they work in bringing more products to the company pipeline? We will see in the near future. However, technology transfer deals between academia and the private sector are not easy. Few professionals are experts in this field; even many lawyers do not understand it or are not interested in it. Still, some successful deals have happened and are happening. I have been the Director of a TTO in a university for a year and I’ve seen all the problems related to doing these types of deals. TTOs help universities commercialize the intellectual property and research developed on their campuses. My take on these problems: mostly a lack of efficiency in the processes. Why? What are the main problems in tech transfer and licensing between universities and the private sector? First, academic professors and researchers lack business skills, since they were not trained for them. In this case, the university must have a Technology Transfer Office (TTO) or a similar department that supports the professor. Secondly, universities still have some aversion to transferring a product or solution to the companies that will bring it to market. I believe this is a cultural problem.
When we say university, everybody has in mind a teaching institution without the objective of generating revenue and profit (and this is not the reality; universities need sources of revenue beyond student tuition and private donations). Thirdly, when both sides, the university TTO and the representatives of the private sector, sit at a big “marble” table to discuss terms and conditions, the university is always the weaker party, even though everything started there, with its own resources. That could be justifiable, since after the deal most of the risk, until the new product goes to market, falls on the company licensing it. What does the future hold for licensing and tech transfer? Well, I believe that all universities must have a TTO with well-trained people to get the best deals for the university, and the private sector needs to change its approach when dealing with an institution that started the whole development of a product or service of its interest. It needs to be a “win-win” situation; otherwise it is a waste of time and money for both sides. I believe this is a good start, and things are changing in tech transfer and licensing. However, both sides, especially the university, need to be more “professional” and business-oriented when closing these types of deals because, in the end, they will get their piece of the money pie if everything works well, in most cases with no long-term risk.

My Trip to China

Thursday, December 29th, 2016

Last November, the Chinese Government selected five Brazilian representatives (see image and legend below) for the ITTN Program, and I was one of them. I was selected to spend the whole month of November in China, since I am the Director of Innovation and Research at UCB in Brasília, the capital of Brazil. The selection criteria were the innovation stage and evolution of the academic center of the institution you are linked to and its current leadership; my curriculum and merit as Director of Innovation were also evaluated. In 22 days, we had the chance to visit and do specific training in various Chinese cities, such as Shanghai, Beijing, Fuzhou, Shijiazhuang and Nanjing. Groups representing more than 15 countries, such as Russia, South Africa, Belarus, Serbia, Thailand, India and others, were also selected for this training. Well, I just needed to write a blog post dedicated to this trip, since it changed my vision of China. ITTN stands for “International Technology Transfer Network” and is a non-profit organization based in China dedicated to promoting international technology transfer and innovation services for organizations from different countries around the world. ITTN has already developed long-term partnerships with more than 400 foreign technology transfer organizations from more than 40 countries and with more than 500 national organizations in China, forming a major globalized resource collaboration network. After this trip I can say that China has changed, and a lot. Even though there are lots of problems, such as poverty, pollution and violence, I saw a country very focused on strategic planning for the future, especially in sectors such as technology and innovation. We had the chance to visit universities, hospitals, several technology parks and company incubators, and I was blown away. The experience in all the cities we visited was extremely positive.
The country is in full growth in the productive and civil construction sectors. We had the opportunity to travel by bullet train between cities; the trains can reach up to 400 km/h, and China has built a huge high-speed railway network. We saw impressive infrastructure work everywhere, with technology parks connected to shopping malls, housing and leisure areas. In all the cities we visited, there is a constant concern with cleaning up air pollution, adopting green energy and reducing the emission of toxic gases, mainly carbon monoxide from cars, since the country has 1.3 billion people. China has been investing heavily in technologies to end air toxicity, which can reach extreme levels, as we saw up close in Beijing and Shijiazhuang, for example. Air pollution can be felt as a strong smell of toxic gases, and many people have to wear masks on days with extreme pollution. In cities like Shanghai and Nanjing, pollution is already being better controlled by measures that limit the number of cars per family and encourage the use of small electric motorcycles. One fact that impressed me was the cleanliness of Chinese cities and of all public transportation. My previous impression was that Chinese cities were extremely dirty, especially the streets; that impression was dispelled during my visit. In my view, the main highlight of this trip was China’s strategic planning to become the world’s largest economy by 2020.
The Chinese are bringing qualified professionals and people from around the globe to demonstrate how they will achieve this goal through: 1) construction of technology parks in association with universities in several key cities; 2) changing the country’s role from supplier of raw materials, like Brazil, to importer and developer of new technologies, mainly through technology transfer and licensing; and 3) strategically planning the entire scientific and technological sector together with the productive sector by building companies (mainly spin-offs from the academic sector and start-ups) in order to generate public-private partnerships. So the main takeaway of my visit to China is that the Chinese people, as a nation, are executing an impressive action plan focused on specific areas of technology and innovation that are strategic for transforming a country into a world leader. Thus, here is my advice: watch out, United States of America (USA), the Chinese are coming; fast and furious…

From right to left: Myself, Pascale from PUC-RGS, Cristina from FORTEC, Agnaldo from SEBRAE and Tais from PUC-RIO. Source: PUC-RIO’s Website.

Smart and Fast Data, not just Big Data…

Friday, April 22nd, 2016

I’ve just finished reading the book “Big Data: Using SMART Big Data, Analytics and Metrics To Make Better Decisions and Improve Performance” by Bernard Marr. This book is a must-read for data scientists and anyone interested in data analytics. It is now clear that, at the end of the day, it’s not about how much data you generate (in a scientific laboratory, in healthcare, in logistics, in the financial world, etc.); it’s about how well you use it. Though Big Data has recently been deemed an over-hyped term, it’s not going to go away any time soon. I believe the opposite: data science will be applied to every sector of society. Information overload is a phenomenon and a challenge we face all the time now. In fact, large-scale data analytics, predictive modeling and data visualization are increasingly crucial for companies in both high-tech and mainstream fields to survive. Big data capabilities are a need, not a want, today and tomorrow. Big Data is a broad term that encompasses a variety of angles, and there are complex challenges within it that must be prioritized and addressed, such as “Fast Data” and “Smart Data.” “Smart Data” means information that actually makes sense. It is the difference between seeing a long list of numbers referring to weekly sales and identifying the peaks and troughs in sales volume over time. Algorithms turn meaningless numbers into actionable insights: smart data is data from which signals and patterns have been extracted by intelligent algorithms. Collecting large amounts of statistics and numbers brings little benefit if there is no layer of added intelligence and expertise. By “Fast Data” I mean as-it-happens information enabling real-time decision-making. An advertising firm, for example, needs to know how people are talking about its clients’ brands in real time in order to mitigate bad messages. A few minutes too late, and viral messages might be uncontainable.
A retail company needs to know how its latest collection is selling as soon as it is released, to decide whether to continue or stop selling it. Public health workers need to understand disease outbreaks in the moment so they can take action to curb the spread; one example is Google Flu Trends detecting spikes in flu-related searches on Google, and Twitter data has been used in a similar way to track outbreaks of infectious diseases. A bank needs to stay abreast of geopolitical and socio-economic situations to make the best investment decisions using a global-macro strategy. A logistics company needs to know how a public disaster or road diversion is affecting transport infrastructure so that it can react accordingly. One of the biggest benefits of integrating smarter data into content experiences is that it levels the playing field with larger competitors who may have more resources to burn on advertising media. By looking at visitors as a whole, and deriving meaning from all of their content experiences, we can deliver more relevant and contextual experiences than our competitors. And today we can deliver those solutions far more cheaply than the multi-million-dollar solutions that may historically have been out of our reach. Big Data is just a problem; Smart Data is a solution that changes the game of marketing and how we deliver better solutions for customers from this point forward. According to John Bollen, the four keys to converting big data into smart data are: 1) organize and manage resources; 2) identify your customers and/or targets; 3) target this specific group of people and evaluate the outcome in real time; and 4) use data analytics to look forward and make forecasts. In addition, we have to remember that technology alone doesn’t solve the problem of changing big data into smart data. It’s more about process than technology.
While tools are getting better at aggregating and parsing data, it’s ultimately up to us as data scientists to connect past behavior to future wants, preferences and needs. The technological advancements seen all the way back to the Industrial Revolution have been about automating manual processes, not making us smarter. The questions without answers are: “What processes are in place to handle the data?”; “What governance is in place?” (in other words, who will make the decisions that follow from the data?); and finally, “How do we operationalize the data?” Technology won’t answer these questions. First, we must have the right people and processes in place; then we look at how technology fits in. Smart and fast data generate reliable answers; however, we need the right decision makers in the end.
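The weekly-sales example above can be sketched in a few lines of Python (an illustrative toy of my own, not taken from Marr’s book): a simple local-extrema scan that turns a raw list of numbers into the weeks where sales peaked or dipped.

```python
# Minimal sketch: turning a raw list of weekly sales figures ("big data")
# into peaks and troughs ("smart data") via a local-extrema scan.

def peaks_and_troughs(series):
    """Return indices of local maxima (peaks) and local minima (troughs)."""
    peaks, troughs = [], []
    for i in range(1, len(series) - 1):
        if series[i] > series[i - 1] and series[i] > series[i + 1]:
            peaks.append(i)
        elif series[i] < series[i - 1] and series[i] < series[i + 1]:
            troughs.append(i)
    return peaks, troughs

weekly_sales = [120, 135, 160, 150, 140, 155, 180, 170, 130, 145]
peaks, troughs = peaks_and_troughs(weekly_sales)
print(peaks)    # weeks where sales peaked: [2, 6]
print(troughs)  # weeks where sales dipped: [4, 8]
```

Real pipelines would smooth noise first and run this over streaming windows, but the principle is the same: the algorithm, not the raw list, is what makes the data “smart”.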

Image Source: Digitalroute

The Acquisition Boom in Technology

Wednesday, March 26th, 2014

In this post, I will leave science behind and discuss the technology “craziness” that has been happening: billion-dollar acquisitions of companies that do not even have revenue or profit. The internet revolution is now eccentric, transforming youngsters with an app idea into millionaires and billionaires overnight. In less than two months, for example, Facebook acquired WhatsApp, the drone maker Titan Aerospace and now the virtual reality company Oculus. A mix of software, hardware and new technologies is becoming the sexiest thing to acquire nowadays. Companies with tons of cash, such as Facebook, Apple, Microsoft, Google, Yahoo and others, can choose what to buy and how to buy it. We are living in a new era of the internet. Working on Wall Street in the financial district closing multi-million-dollar investment deals is not the big American Dream anymore. The dream is to have an idea, build a start-up and sell it for millions or billions of dollars to the big players, or even grow it to become the next Facebook or Google. One interesting case is WhatsApp, which was sold to Facebook for around 16 billion dollars. If you think about it, this is a lot of money, but strategically it was a bold move. WhatsApp has essentially no revenue and does not even sell ads. It is simple, just a messaging app, but its number of users builds its value. Let’s do the math: Facebook has around 1.2 billion users, and probably millions of them use WhatsApp or have an account there. However, many millions of WhatsApp users (probably around 200 million people) don’t have a Facebook account. Thus, Facebook indirectly bought the new users that WhatsApp can bring to its platform. I use WhatsApp to message family, friends and colleagues and to follow the trend in text messaging. It is simple, useful and, bottom line, it does the job. Another example is Google buying Nest for 3 billion dollars.
As its own early dive into wearables with Google Glass demonstrates, Google knows it can’t miss the next big leap in hardware, and Nest gives it what it wants in terms of connected home hardware, starting with thermostats. The latest big move came these days with Facebook buying the virtual reality company Oculus for 2 billion dollars, another example of a software company buying a hardware company. Facebook believes Oculus has the potential to be the most social platform ever, enabling you to share not just moments with your friends online, but entire experiences and adventures. This might be a competitive move by Facebook against the Google Glass platform. Since the burst of the internet bubble in the early 2000s, we have been watching another wave of tech internet companies popping up and billion-dollar acquisitions occurring. It has been all about strategy and new technology, not profits or revenue growth. The companies in the big “club”, such as Facebook, Apple, Google, Yahoo and Microsoft, are positioning themselves in a competitive scenario. The only question is who is going to be successful in the long run with these acquisitions. There is a promising future in the tech sector. Are we close to the movie “Minority Report” with virtual reality? Facebook believes so. We will see.
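The WhatsApp arithmetic above can be written out explicitly. The figures are the rough, back-of-envelope estimates from this post (a ~$16 billion price and ~200 million WhatsApp users without a Facebook account), not official numbers:

```python
# Back-of-envelope estimate: what did Facebook effectively pay per
# genuinely new user? Both inputs are rough assumptions from the post.
deal_price = 16e9   # ~$16 billion for WhatsApp
new_users = 200e6   # ~200 million WhatsApp users without a Facebook account

price_per_new_user = deal_price / new_users
print(f"${price_per_new_user:,.0f} per new user")  # → $80 per new user
```

Around $80 per acquired user is a lot, which is why the deal only makes sense as a strategic bet on engagement and reach rather than on current revenue.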

A Digital Revolution in Healthcare?

Friday, January 24th, 2014

Recent advances in information technology have opened an exciting new frontier. A convergence is taking place between medicine and wireless technology, making it possible to change the way medicine and healthcare delivery are managed. In the coming years, this digital revolution in medicine could have far-reaching and dramatic implications, changing the face of healthcare around the world. Mobile sensors and advanced processors are already enhancing doctors’ understanding of their patients’ signs and symptoms, leading to more personalized and effective ways to maintain health, detect problems and treat illness. In our increasingly connected world, where more than 6 billion people have access to mobile devices, this individualized healthcare data can potentially be collected and distributed nearly instantly, resulting in improved care for people of all ages, in every area of the world, who suffer from virtually any disease, from diabetes to Alzheimer’s to breast cancer. The two major drivers of the increased interest in digital healthcare are the emergence of online networks (mainly because the internet is getting faster) and big data accumulation, mostly coming from medical records and scientific research (for more information, see my recent article “The impact of online networks and big data in Life Sciences” in the Social Networking Journal). In fact, there are already more than 10,000 medical and healthcare apps available for download from Apple’s iTunes store, making this the third-fastest-growing category among iPhone and Android users, with no sign of slowing: the mobile health market is estimated to grow ten-fold, from US$1.2 billion in 2011 to US$12 billion in 2018. The major barriers to speeding up the application of digitalized information in healthcare are data privacy and regulatory hurdles (see more in my article “Big Data in Biomedicine”, published in Drug Discovery Today).
However, I believe that technology always overcomes such barriers, especially when big amounts of cash are being thrown at a specific sector such as healthcare. For example, health is the second most searched subject on the web, and the mobile industry amplifies this change, mainly because a large and growing share of people now own a smartphone with internet and search capabilities. Ultimately, this revolution will be driven by “machine to machine” interactions. For this new digital era to be truly transformational, apps need to transmit information directly to healthcare providers. This will allow healthcare systems not only to react, but to predict health outcomes based on a patient’s personal information. mHealth, or mobile health, will allow communication and monitoring to take place like never before: your body will talk to the healthcare system without you even knowing it. This new revolution has started now and won’t stop. If it helps save more lives so that people can live longer and better, so be it! (Image Source: Huffingtonpost)
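The ten-fold market projection cited above implies a specific compound annual growth rate (CAGR). As a quick sanity check on those numbers ($1.2 billion in 2011 to $12 billion in 2018), the standard formula is CAGR = (end/start)^(1/years) − 1:

```python
# Implied annual growth rate for the mobile-health market projection
# cited above: $1.2B (2011) growing to $12B (2018) over 7 years.
start, end, years = 1.2e9, 12e9, 2018 - 2011

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 39% per year
```

A sector compounding at roughly 39% a year for seven years is an aggressive forecast, which helps explain why so much cash is chasing digital health.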

Can Google Help Us Extend Lifespan?

Monday, October 7th, 2013

If everyone on the Forbes company list and all the world’s millionaires and billionaires simultaneously (and tragically) got cancer, or Parkinson’s (or any other complex disease), the world would probably be well on its way to finding a cure for these illnesses, thanks to the enormous wealth that would be incentivized to back those efforts (see the article “WTF Is Calico, And Why Does Google Think Its Mysterious New Company Can Defy Aging?” for more information). Finding a cure for an intractable disease requires time, enormous amounts of human and financial capital, cooperation, a lot of basic and translational research and at least a few public-private partnerships. It is costly, complex and messy. Nobody thinks about how to tackle these diseases and find better treatments or even a cure; the truth is that until you get sick, you don’t really care. This is why Calico, Google’s newest mad-science project, is potentially interesting, and it was all over the news a couple of weeks ago. Calico is a healthcare company, backed by Google, that will be able to use all the crawled information in Google’s servers and databases to seek valuable clues for treating diseases and, eventually, beating death. Crunching data into actionable information is Google’s future, as with the driverless cars, Google Glass and other “crazy” projects from the arm of Google Labs named Google [X] that Sergey Brin leads. Calico can use Google’s reach to make health data available and actionable, organize it and create a community around it, opening unprecedented opportunities for patient engagement with health data. Big Data analytics in healthcare is getting a lot of attention right now, and Google couldn’t be left behind. Some say this is the company’s boldest bet yet (see the article “Google vs. Death” on the cover of TIME Magazine). Calico represents the company’s largest healthcare initiative since Google Health sprinted its way into obscurity and was shut down.
Larry Page says he wants to solve cancer, but as a researcher in oncology for a decade, I would say this problem is more complex than it appears on the surface. I really think what Google has done and is doing is amazing, and I believe it takes a company with this many resources to attack and find cures for diseases such as cancer. There are many elements in Google’s history indicating that the company will at least be able to help: 1) it has access to tons of data points, especially published data from scientific journals; 2) Google may not solve death or help us beat it, but it will at least help us understand the aging process better, using our own data on the web with time as a variable; and 3) at the very least, it brings all kinds of attention from industry and people, especially those already sick (see “What Google’s Calico Means For Healthcare”). There is still a significant gap between researchers and entrepreneurs. At the same time, however, there are now more opportunities than ever for both sides to team up to make technological breakthroughs in healthcare and, in turn, to make those breakthroughs more accessible to the general public. So I believe that even if Calico can’t give us immortality or cure complex diseases, at least it is turning industry’s focus toward healthcare, one of the sectors that has not had many technological breakthroughs in years. Maybe we should focus our ideas and ventures on the healthcare sector, as Google is starting to do. If they are successful, we all win. But if they are not, it is just another “crazy” project from Google [X] labs that failed. We will see. (Image Source: TIME Magazine)

Chicago – an Emerging Biotech Hub

Tuesday, September 3rd, 2013

Article by Charlotte Chen, Ph.D. Candidate at Northwestern University

CHICAGO – While Boston and Silicon Valley seem to be the most visible biotech hubs in the US, the nation’s best-kept secret in biotech may just be in the Midwest. Indeed, in 2011, the nine-state Midwestern super cluster – Illinois, Ohio, Wisconsin, Minnesota, Michigan, Indiana, Missouri, Kansas and Iowa – employed over 120,000 more people than either California or the East Coast (Ernst & Young, 2013), with more than 377,900 employees in over 16,800 establishments. The key player in the Midwest super cluster is undeniably Illinois. The state is home to more than 3,500 biotech companies that employ around 81,000 people, with a gross economic output of around $98.6 billion (Illinois Government News Network, 2013). Chicago sits at the center of all this, with some of the best medical research universities in the Midwest and a vibrant biotech industry. In the Chicago metropolitan area alone, around 45,000 employees work in about one thousand biotech companies. Some big-name biotechnology companies headquartered in the greater Chicago area include Abbott Laboratories, credited with developing the world’s first HIV blood screening test in 1985 and now employing 90,000 people across 130 countries, and Baxter Healthcare, known for its hemophilia, renal disease and immune disorder therapies, which brought in $10.4 billion in earnings in 2006. So why does Chicago get overshadowed by its counterpart biotech hubs on the East and West coasts? One factor could be a lack of entrepreneurial tradition in biotech. Silicon Valley and Boston have created an industry culture that embraces risk-taking, and entrepreneurs expect to fail a number of times before hitting on a winning idea. While the Midwest has all the expertise needed to drive biotech innovation, this entrepreneurial spirit is only recently beginning to flourish.
In 2012, $150 million in venture dollars went into biotech startups in Chicago (City of Chicago, 2013), versus $1 billion in Silicon Valley and $860 million in Boston (MIT Technology Review, 2013). Hoping to raise Chicago’s profile in the biotech world, state and city policymakers recently established two new biotech incubators, set to open in Chicago within the next year. The first of these incubators is the Health, Technology, and Innovation (HTI) facility, created by Governor Pat Quinn in partnership with the University of Illinois at Chicago, with the goal of boosting Illinois’ economy. HTI is located at Chicago Technology Park in the city’s Near West Side, within the Illinois Medical District and in close proximity to the University of Chicago’s Pritzker School of Medicine, and is set to open in the summer of 2013. The State of Illinois and UIC split the bill equally, each contributing $1.7 million for a total of $3.4 million in initial funding. HTI will provide wet and dry lab space and office space to tenants, as well as access to a network of scientists, investors and other leaders in the biotech industry. The incubator is intended to provide a low-risk environment for academic researchers to bring their ideas out of university labs for the first time and see whether they can become successful ventures. The City of Chicago is also throwing its support behind the biosciences industry. Mayor Rahm Emanuel recently announced plans to construct a biotech incubator in downtown Chicago, set to open in 2014. With this additional facility, the city hopes to increase Chicago’s national profile as a biotechnology innovation hub. The new incubator will provide a space for researchers from Chicago’s top universities to collaborate with investors and partners in industry to develop discoveries from academic research with commercial potential.
It will also provide a downtown presence for major biotechnology companies in the area, most of which are located in the northern Chicago suburbs. This new incubator will be well supported through access to professionals with business expertise, such as Chicago Innovation Mentors and ChicagoNEXT. Chicago Innovation Mentors is an organization that pairs university faculty with industry mentors to accelerate the formation of new biotechnology ventures, and its members include the University of Chicago, Northwestern University, the University of Illinois at Chicago, and iBio Institute/PROPEL. ChicagoNEXT is an organization created by World Business Chicago to accelerate innovation in clean tech, web and mobile technology, and biosciences. The new biotech incubator is roughly modeled on 1871, an incubator for IT and digital startups in downtown Chicago. Chicago is also breeding homegrown entrepreneurial talent. The University of Chicago’s Polsky Center for Entrepreneurship offers entrepreneurship courses, a student entrepreneurship club, business plan competitions, and entrepreneurship mentoring to its students, as does Northwestern’s Farley Center for Entrepreneurship and Innovation. The University of Illinois at Chicago offers a Bachelor’s in Entrepreneurship, and its Liautaud Graduate School of Business offers an entrepreneurship program. While established biotech companies are by no means a new presence in Chicago, a nascent culture of innovation is emerging in the style of Silicon Valley and Boston. Perhaps with all these efforts to spark biotech innovation in Chicago, the Midwest’s “best kept secret” in biotech will be unveiled soon. (Chicago’s Skyline Image Source: Wikimedia Commons)

See original article published at the Roundtable Review

A new way to store digital information?

Sunday, August 26th, 2012

Information of all kinds has been increasing since humans learned how to write on stone in ancient times. Now that there is an explosion of information, mainly digital information, we need better ways to store it in a safe and stable manner. One solution that is becoming reality is the “cloud”, a service that companies provide for information storage. Companies such as Apple, Dropbox, Google and others offer server space to store all types of digital information (books, movies, photos, etc.) for a fair price. However, it looks like nature already had a simple and stable way to store data. A recent article in Science by Harvard Professor George Church (for more information, see “Next-Generation Digital Information Storage in DNA”) demonstrates that nature indeed has a better way to do information storage: the DNA molecule. DNA has many potential advantages over other systems, since it is very stable and mostly immutable (except when exposed to mutagens that cause mutations in its bases, changing the sequence). Dr. Church’s group developed a strategy that encoded the digital information of a whole printed book using a novel scheme based on next-generation DNA synthesis and sequencing technologies. The group converted 53,426 words, 11 images and 1 JavaScript program into a 5.27-megabit bitstream, which was encoded onto 54,898 oligonucleotides of 159 nucleotides each (oligonucleotides are short pieces of DNA). After this encoding, the oligonucleotides were synthesized and printed, together forming a stable DNA library. The newly synthesized DNA was then sequenced to recover the information, which was done successfully. This article is a milestone not just in storing information inside a biological molecule, but in showing that nature itself has smart ways to “save” and encode information. After all, DNA encodes entire cellular programs inside the cells of organisms.
Importantly, we are still trying to decipher DNA’s cellular code, since it is composed of different layers of information, not just the sequential bases of the “linear” DNA. Examples include chemical modifications of the DNA and its associated proteins, such as methylation or acetylation, which can change the meaning of specific regions. The fact is that DNA is very suitable for immutable, high-latency, sequential-access applications such as archival storage – cells already use it that way. So why did we not think about using DNA to store information before? Mainly because only now do we have the technology to do it. We can synthesize long stretches of DNA, print it on glass slides and also sequence it quickly, since DNA sequencing technologies are evolving faster than the speed of computers. In fact, Moore’s law does not apply to DNA sequencing: the costs of DNA synthesis and sequencing have been dropping at exponential rates of 5- and 12-fold per year, respectively. This is much faster than electronic media, which improve at only about 1.6-fold a year. Additionally, DNA has other advantages for storing information, such as density, stability and energy efficiency. I believe this might be just the start for storing digital information in the DNA molecule. Information will continue to accumulate, and long-term, stable solutions like the one presented by this group will be able to store it. The take-home message is that nature is teaching us – the tools are already here, we just have to learn how to use them.
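To make the idea concrete, here is a minimal sketch of one-bit-per-base encoding in the spirit of the Church paper (0 maps to A or C, 1 maps to G or T); the helper function names are my own, and a real implementation would also split the stream into addressed oligonucleotides and add error correction:

```python
# Toy sketch: encode text as DNA, one bit per base (0 -> A/C, 1 -> G/T).

def text_to_bits(text):
    """Convert a UTF-8 string into a string of '0'/'1' characters."""
    return "".join(format(byte, "08b") for byte in text.encode("utf-8"))

def bits_to_dna(bits):
    """Map each bit to a base, picking one of the two allowed choices."""
    return "".join("A" if b == "0" else "G" for b in bits)

def dna_to_text(dna):
    """Decode: A/C -> 0, G/T -> 1, then rebuild the original bytes."""
    bits = "".join("0" if base in "AC" else "1" for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

message = "DNA stores data"
dna = bits_to_dna(text_to_bits(message))
assert dna_to_text(dna) == message  # round trip recovers the text
```

The two-choice mapping per bit is what lets a real encoder avoid problematic sequences (long homopolymer runs, extreme GC content) while still carrying one bit per base.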

Bioinformatics now! It is time to learn computer coding

Monday, July 30th, 2012

The science of bioinformatics, or computational biology, is increasingly being used to improve the quality of life as we know it. Bioinformatics developed out of the need to understand the DNA molecule, also known as the code of life. Massive DNA sequencing projects became a reality with the advent of next-generation technologies, and this has aided the growth of bioinformatics as a science. DNA, the basic molecule of life, together with other layers of information in the cells, directly controls the fundamental biology of life. It codes for genes (both protein-coding and non-coding) that act in concert with environmental factors to determine the biological makeup of humans or any other living organism. It is variations and errors in the genomic DNA (such as mutations and polymorphisms) that ultimately define the likelihood of developing diseases, or the resistance to these same disorders. Thus, the ultimate goal of bioinformatics is to uncover the wealth of biological information hidden in the mass of sequence, structure, literature and other biological data, to obtain a clearer insight into the fundamental biology of organisms, and to use this information to enhance the standard of life. Bioinformatics grows together with computer science, since breakthroughs in one field benefit the other. Recently, with the explosion of genetic data and genomics information, bioinformatics became an essential tool for the life sciences. In other words, the need to deal with complex types and sources of data from living organisms is increasing the importance of computer science for biology. Never in history have computers and biology been so close to each other. I believe that, in the future, every laboratory in both the academic and private sectors will need a group dedicated to bioinformatics. This is already happening in several institutions and companies. Genomics and the projects related to it are producing amounts of data that our brains simply cannot process.
Big challenges will have to be faced in order to facilitate the interconnection between computer science and biology. Many computer scientists have no biology or medical training, and many biologists know little of the language used by “computer coders”. Thus, to be a multitasking scientist today, an individual needs solid computer skills. And by computer skills I do not mean just knowing how to use a computer: a deeper knowledge is needed to deal with the large amounts of data being generated by genetics and genomics right now.
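As a taste of what this kind of coding looks like, here is a small example of one of the simplest statistics computed on raw sequence data, GC content; this is a toy illustration, and real pipelines would use dedicated libraries such as Biopython:

```python
# Toy bioinformatics example: the fraction of G and C bases in a DNA sequence,
# a basic quality/composition statistic for sequencing data.

def gc_content(seq):
    """Return the fraction of bases in seq that are G or C (0.0 for empty input)."""
    seq = seq.upper()
    gc = sum(1 for base in seq if base in "GC")
    return gc / len(seq) if seq else 0.0

print(gc_content("ATGCGC"))  # 4 of the 6 bases are G or C
```

Even a few lines like these already require the programming basics (functions, iteration, string handling) that biologists increasingly need to learn.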

How new web technologies and innovations are impacting our society

Sunday, January 29th, 2012

Today’s blog post will focus on the impact that novel web technologies, such as social media and search engines (e.g. Google), have on our society. These tools have been used to track diseases and to better understand different trends online. If you are like the millions (or billions) of people around the world who can access the internet, you probably engage in internet-enabled self-diagnosis (among other types of queries) using search engines and other web tools. In today’s hyper-connected world, these tools are used not just for self-diagnosis or to track diseases in populations, but also to evaluate the topics being discussed in social media (for example, trending topics on platforms such as Twitter and Facebook). First, let’s analyze how a search engine can help predict epidemic outbreaks of a disease caused by viruses or bacteria. Google Inc. did a very interesting study to evaluate whether its search engine could detect areas where an influenza epidemic might be imminent (for more details see the article “Detecting influenza epidemics using search engine query data” in Nature). The study was very straightforward: it evaluated queries made on Google using keywords related to influenza. Using this approach, they were able to track areas that could be experiencing an influenza epidemic. Their results were compared with those of the CDC (Centers for Disease Control and Prevention, a United States government organization) to see which could identify affected areas earlier, and Google’s algorithm won. Google web search logs could provide one of the most timely, broad-reaching disease monitoring systems to date. Whereas the traditional monitoring systems used by the CDC and the government require 1-2 weeks to gather and process the data, Google’s estimates are updated on a daily basis. Now Google and the CDC work together on disease outbreaks, showing how this tool can help the government track them.
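The core of the idea can be sketched very simply: fit a linear relationship between a region’s flu-related query share and the fraction of doctor visits for influenza-like illness (ILI), then use fresh query data to estimate ILI before official numbers arrive. The numbers below are invented for illustration; the actual Google study fit a log-odds linear model on far more data:

```python
# Hypothetical sketch of query-based flu estimation (made-up numbers).

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for paired data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Weekly flu-query share vs. observed ILI visit fraction (invented data).
query_share = [0.010, 0.015, 0.022, 0.030]
ili_fraction = [0.021, 0.029, 0.044, 0.060]

slope, intercept = linear_fit(query_share, ili_fraction)

# Estimate this week's ILI fraction from today's query share,
# days before surveillance reports are compiled.
todays_estimate = slope * 0.025 + intercept
```

The speed advantage the post describes comes entirely from the input: query logs are available the same day, while clinical surveillance data takes weeks to aggregate.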
The second tool that is revolutionizing various sectors of our society is social media. Two recent studies showed, using the Twitter platform, that we can monitor trends based on users’ posts. PacSocial, a corporation focused on developing technologies that enable large-scale shaping of online social communities, used “socialbots” to influence connections and interactions between users (see “PacSocial: Field Test Report” for more details). They were able, for the first time, to show how tools such as socialbots can influence human connections online. This study is crucial for a better understanding of how social media is shaping our society. Another study of Twitter users, designed by professors from MIT (see “Modeling the adoption of innovations in the presence of geographic and media influences”, recently published in PLoS ONE), examined the effects of social network structure on innovation adoption, creating a model based on geography and mass media. The authors showed that mass media was responsible for increasing the number of Twitter users over time. Interestingly, they also showed how powerful the word-of-mouth spread of these platforms is among the young. Cities with the most early adopters of Twitter tended to have large universities or technology centers that attract younger people. Twitter was launched in the San Francisco Bay Area, and the next wave of users came from the East Coast, mainly Cambridge and Boston, where MIT and Harvard University are located. These reports on new technologies influencing society and decisions, especially in epidemic outbreaks, show how empowering search engines and social media have become for our society. I suppose several other scientific studies will be done, especially using the Facebook platform, which now has more than 800 million users. I hope that these new web tools will help our society in different ways, especially in disease monitoring.
This could be important for understanding the trends that are influencing our society online.
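The interplay of mass media and word-of-mouth that the MIT study models can be illustrated with the classic Bass diffusion equation, which combines an external "media" coefficient p with an internal "word-of-mouth" coefficient q. This is not the authors’ exact model, and the parameter values below are invented, but it captures the same intuition:

```python
# Illustrative Bass diffusion sketch: media (p) seeds adoption,
# word-of-mouth (q) accelerates it as the adopter base grows.

def bass_adopters(p, q, market, steps):
    """Simulate cumulative adopters per time step under Bass diffusion."""
    adopters = 0.0
    history = []
    for _ in range(steps):
        new = (p + q * adopters / market) * (market - adopters)
        adopters += new
        history.append(adopters)
    return history

curve = bass_adopters(p=0.01, q=0.4, market=1_000_000, steps=30)
# Adoption traces an S-curve: a slow media-driven start, word-of-mouth
# acceleration, then saturation as the market fills up.
```

The early-adopter clusters around universities the post describes correspond to regions where the word-of-mouth term dominates earliest.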