title: Lost in Translation (LiT)
authors: Dollery, Colin T
date: 2014-04-11
journal: Br J Pharmacol
DOI: 10.1111/bph.12580

Translational medicine is a roller coaster with occasional brilliant successes and a large majority of failures. Lost in Translation 1 (‘LiT1’), beginning in the 1950s, was a golden era built upon earlier advances in experimental physiology, biochemistry and pharmacology, with a dash of serendipity, that led to the discovery of many new drugs for serious illnesses. LiT2 saw the large-scale industrialization of drug discovery using high-throughput screens and assays based on affinity for the target molecule. The links between drug development and university sciences and medicine weakened, but there were still some brilliant successes. In LiT3, the coverage of translational medicine expanded from molecular biology to drug budgets, with much greater emphasis on safety and official regulation. Compared with R&D expenditure, the number of breakthrough discoveries in LiT3 was disappointing, but monoclonal antibodies for immunity and inflammation brought in a new golden era and kinase inhibitors such as imatinib were breakthroughs in cancer. The pharmaceutical industry is trying to revive the LiT1 approach by using phenotypic assays and closer links with academia. LiT4 faces a data explosion generated by the genome project, GWAS, ENCODE and the ‘omics’ that is in danger of leaving LiT4 in a computerized cloud. Industrial laboratories are filled with masses of automated machinery while the scientists sit in a separate room viewing the results on their computers. Big Data will need Big Thinking in LiT4 but with so many unmet medical needs and so many new opportunities being revealed there are high hopes that the roller coaster will ride high again.

Sir Colin Dollery is a member of the Clinical Translational Pharmacology Group of the Nomenclature Committee of the International Union of Basic and Clinical Pharmacology (NC-IUPHAR). The role of this group is to provide advice on the translational aspects of receptor/target pharmacology, and to translate activity at drug target sites to clinical efficacy. Further information can be found on the IUPHAR/BPS Guide to PHARMACOLOGY website (http://www.guidetopharmacology.org/). This has recently been updated to include curated information on all data-supported targets of approved drugs, their interacting ligands and expert comments on clinical efficacy.
This article is a broad-brush, personal review of the evolution of translational medicine over the past 60 years and a glimpse into its future. Translational medicine has many different meanings, ranging from re-branding of the clinical component of clinical pharmacology to the whole ebb and flow of discovery from science to patient care and back. The NIH definition of translational research visualizes it as a bidirectional flow in which research findings are moved to and from the researcher's laboratory to the patient's bedside (Zerhouni, 2005). The MD Anderson slogan 'from bench to bedside and back' is similar but more concise. Neither fully conveys the multidisciplinary environment from chemists to physician-scientists and on to widespread use in the community, although the NIH gets close. At least six other disciplines, sharing fully in a drug development project, are needed to fulfil the hope of translating an idea into a useful therapeutic agent. Nor do they grasp how much the world of drug discovery and development, for both science and medicine, has changed over the last 60 years and is still changing. The very high incidence of failures (>90-95%) illustrates the complexity and difficulty at every stage. This article attempts to examine how translational medicine has progressed and stumbled over this period and the challenges it faces in the future. A better way of defining translational medicine might be 'from basic science to maintenance of good health'. To many people, translational medicine conjures up a picture of a physician being given a supply of a new medicine from a pharmaceutical company to conduct a clinical trial to test for activity in a disease. Interaction between pharmaceutical R&D and physicians in clinical practice has never been quite as informal as that, but during the rapid expansion of drug discovery in the late 1950s, direct contact between drug discovery teams and hand-picked clinical investigators was the norm. It was a remarkably successful period and one that is called, in this article, 'Lost in Translation 1 (LiT1), the Golden Years'. It has important lessons. The second period, LiT2, with the subtitle 'Genes, automation and brickdust', was one of rapid expansion by the pharmaceutical industry. It saw the birth of highly automated high-throughput screening (HTS) of millions of molecules, more detailed regulatory review, the spread of clinical trials throughout the world and the growth of very large contract research organizations. There were successes from this approach but fewer than had been anticipated and the costs soared.
Direct contact between industrial research teams and the clinical investigators carrying out trials, even early phase trials, waned. Translational medicine now progressed through a complex of industrial, regulatory, institutional review board/ethics committee, contract research organization (CRO) and clinical bureaucracy and, most recently, drug purchasing agencies. It was more like a children's game of pass the parcel than a translation. An important loss was that physicians in industry writing trial protocols often had little direct personal experience of the conduct of trials and the clinical investigators in the field had less and less contact with the detailed, and high quality, scientific work done in industry. This was particularly unfortunate for early phase studies that industry often called 'Proof of Concept'. While automation of compound discovery in LiT2 was very disappointing, there were still major positives, particularly kinase inhibitors and breakthrough medicines for HIV. The third phase, LiT3, was launched with major successes with monoclonal antibodies (mAbs) and a flood of 'Big Data' genomic and biomarker scientific information. These advances have been counterbalanced by increasing concerns about the cost of medicines and critical assessment of benefit-risk value by the government agencies that purchase them. These problems should be manageable, but if they are not, we may stumble into LiT4 in a vast cloud of new data that have not led to the medical advances that might have been achieved. The key decision in drug discovery is the choice of a drug target. The way of choosing and interacting with the target for a medicine has shown enormous changes over the past 60 years, but the central role of the target has not. The factors that must now be considered under the heading translational medicine range from chemistry through protein therapeutics, biology, medicine, patients, adherence to medicines, regulators, purchasers and the rising importance of Big Data. It starts simple and becomes increasingly complicated as LiT passes from stage 1 to 4. The choice of target remains the pivot, but it may be a pathway or a control system, not a single molecule. The concept that translational medicine starts when the pre-clinical team offers a molecule for first-time-in-human (FTIH) testing and ends when it is marketed is no longer appropriate. Translational medicine in the LiT3 era begins with the choice of target, not with FTIH, and does not end until many years later when both physicians and patients have understood how to use it safely and effectively in the real world. In the golden years of drug discovery, from the 1950s to the late 1970s, the targets were often GPCRs, or their agonists. Physiologists had already shown that a number of agonists had an important role in organ function, for example, noradrenaline, angiotensin II, histamine, dopamine and 5-HT. Drug molecules often began as analogues of known agonists. From this work grew most of the antihypertensives, antidepressants and antimetabolites, among others. The contribution of scientists like James Black, George Hitchings and Paul Janssen (Van Gestel and Schuermans, 1986; Black, 1996; Hitchings, 2003) was massive but the earlier work of cardiovascular, respiratory and gastrointestinal physiologists and biochemists often gave them important clues about tissues to use for assays in drug development and the potential clinical uses.
A major advantage was that it was possible to measure an effect on BP, airway resistance, gastric acid secretion and white blood cells in pre-clinical species and in man within a short time after administering the drug. There were also great advances in antibiotics with β-lactams, streptomycin, etc. This is not to underestimate the very close personal collaboration possible between chemistry (C), biology (B) and medicine (M) that was greatly valued by all parties and created a spirit of intense motivation and excitement. Translation is always easier if you can see the road ahead and test it, sometimes in a small-scale, carefully designed and documented, clinical study. A simple way of expressing that for teaching is by the following formula: to attack the target (T), C*B*M = T. As we passed from LiT2 to LiT3, that formula became progressively more complicated. Rapid advances in pharmacology and medicinal chemistry provided the tools to begin subdividing agonist and antagonist responses. In 1948, Ahlquist (1948) suggested a subdivision of adrenoceptors into α- and β-types but he had great difficulty in getting his work published. It was developments in medicinal chemistry and pharmacology in the 1960s and 1970s that made it possible to test a wide range of agonists and antagonists on different tissues. Differences in the rank order of responses in different tissues provided strong evidence for the presence of different receptors. This enabled James Black and his team to produce selective antagonists for cardiac β-receptors and for gastric histamine receptors. In doing so, they aided patients with angina pectoris and virtually abolished surgical treatment of peptic ulcers. These successes had such a radical effect in transforming clinical medicine that the terms of reference of drug discovery in the Golden Years also had to change. It was the birth of very large outcome trials. The treatment of high BP took 35 years to evolve from the first treatment of the most severe cases to almost universal treatment of mild increases in BP. In 1950, the most severe form of hypertension (grade IV or malignant hypertension) had the same life expectancy as lung cancer. Following the work of Paton and Zaimis with pentamethonium and hexamethonium in 1950, Sir Horace Smirk in New Zealand treated 53 patients with very severe hypertension with these ganglion blocking drugs, for periods of 2-14 months (Doyle, 1991). He reported dramatic reversal of some of the most severe clinical features and this was the trigger for discovery after discovery of new and better-tolerated BP-lowering agents for the next 20 years. From that followed progressive extension of treatment to less severe elevations of BP. This culminated in the Medical Research Council (MRC) trial in mild hypertension that began in 1973, was published in 1985 and had nearly 80 000 patient-years of treatment. The principal result was a marked reduction of stroke, 60 in the treated group and 109 in the placebo group (Medical Research Council Working Party, 1985). Today, hypertension is usually managed successfully by family doctors and the most severe forms have virtually disappeared in developed countries. The statins were a serendipitous discovery by the Japanese microbiologist, Akira Endo, in a fermentation broth of Penicillium citrinum while seeking novel antibacterial compounds.
The active substance, 'compactin', had no antibacterial activity but was a potent inhibitor of HMG-CoA reductase, the rate-limiting step in cholesterol biosynthesis. Compactin lowered cholesterol in several species but not in the rat. It was developed by Sankyo in Japan and used in patients who were heterozygotes for familial hypercholesterolaemia. In 1978, Merck Research Laboratories found another potent inhibitor of HMG-CoA reductase in a fermentation broth of Aspergillus terreus. This was named mevinolin; later, the official name became lovastatin (Tobert, 2003). Sankyo ran into safety problems with compactin in 1980 and Merck discontinued their clinical work for 2 years because of the close resemblance of lovastatin to the structure of compactin. When the studies restarted, lovastatin demonstrated large reductions of LDL-cholesterol but there was controversy about safety and only limited uptake for clinical use. The turning point was the Scandinavian Simvastatin Survival Study (4S). Over 5.4 years of treatment in 4444 patients, 28% of patients on placebo had a major coronary event compared with only 19% on simvastatin (P < 0.00001) (Pedersen et al., 1998). The major safety problem with statins is rhabdomyolysis and one, cerivastatin, was withdrawn due to the number of serious events including deaths. An expression quantitative trait locus (cis-eQTL) for the gene encoding glycine amidinotransferase, the rate-limiting enzyme in creatine synthesis, has been identified as a marker of simvastatin-induced myopathy. This locus was associated with incidence of statin-induced myotoxicity in two separate populations (Mangravite et al., 2013). Many statins are transported into the liver by the organic anion-transporting polypeptide, OATP1B1, and a reduced-function polymorphism increases the risk of rhabdomyolysis (Romaine et al., 2010). The original synthesis of acetylsalicylic acid was made by a French scientist, Charles Gerhardt, in 1853, but in 1899, Felix Hoffmann at Bayer rediscovered acetylsalicylic acid and tried it on his arthritic father who had found salicylic acid upset his stomach. The arthritis was relieved by the much more potent acetyl derivative, and for pain, the rest is history (Vane et al., 1990). Initial estimates by Bayer were sales of less than 100 kg per year. Aspirin production today is estimated to be about 50 000 tons per year. Much later, academic investigators discovered the effect of aspirin in reducing platelet aggregation by inhibiting thromboxane (Tx) formation (Weiss et al., 1968) and even later demonstrated that low doses of aspirin were sufficient to maintain an anti-thrombotic effect because platelets could not resynthesize the COX enzyme (Patrono et al., 1980). The large-scale simple trials pioneered by Peto and Collins played a large part in demonstrating the value of aspirin in secondary prevention of cardiovascular events (Hennekens et al., 1989; Baigent et al., 1998), but in primary prevention, the benefit-risk results were more equivocal due to the incidence of gastrointestinal and cerebral bleeding (Bartolucci et al., 2011). The large hypertension and statin trials, and widespread use of low-dose aspirin, with the demonstration of substantial reductions in stroke and myocardial infarction, were, ultimately, the main factors that broke the bank of pension funds in the Western world, with statins becoming the most widely used drugs.
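To put the size of these outcome-trial effects in more familiar terms, the sketch below (illustrative only, using the 4S figures quoted above of 28% vs. 19% major coronary events over 5.4 years) works through the standard effect-size arithmetic; the derived quantities are not taken from the original trial report and are shown purely as a worked example.

```python
# Worked example of standard effect-size arithmetic using the 4S figures
# quoted above: 28% of placebo patients vs. 19% of simvastatin patients
# had a major coronary event over 5.4 years of treatment.

def effect_sizes(risk_control: float, risk_treated: float) -> dict:
    """Absolute risk reduction, relative risk reduction and number needed to treat."""
    arr = risk_control - risk_treated      # absolute risk reduction
    rrr = arr / risk_control               # relative risk reduction
    nnt = 1.0 / arr                        # patients treated per event avoided
    return {"ARR": arr, "RRR": rrr, "NNT": nnt}

if __name__ == "__main__":
    res = effect_sizes(0.28, 0.19)
    print(f"ARR = {res['ARR']:.0%}, RRR = {res['RRR']:.0%}, "
          f"NNT ≈ {res['NNT']:.0f} patients treated for 5.4 years per event avoided")
```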
The changes in scale and duration of clinical trials, the cost, operational support, investigator training and data handling support needed on the clinical side and the increasing scale of pharmaceutical companies involved in this type of research caused the James Black model to shrink; complex, multifaceted CROs evolved, serving the pharmaceutical industry by conducting the clinical phase of drug development. The Golden Years were not free from safety problems and two major safety disasters occurred in this period, phocomelia with thalidomide in 1962 (Lenz, 1988) and the oculomucocutaneous syndrome with practolol (Wright, 1975). Aplastic anaemia with chloramphenicol, phenylbutazone and oxyphenbutazone led to their withdrawal or severe limitations on their use in the mid-1970s. Severe liver toxicity with halothane on second exposure, suggesting an immune mechanism, was a major concern and attracted increasing attention from anaesthetists and hepatologists (Neuburger and Kenna, 1987). The thalidomide crisis in 1961-1962 led to much greater emphasis on safety assessment both preclinically and clinically. The UK Committee on Safety of Medicines (CSM) was founded in 1963. The early warning cards the CSM circulated were coloured yellow, as a reminder to doctors that liver toxicity was a relatively common adverse effect of drug treatment (Dunlop, 1977). This was the start of regulatory review of protocols and safety data before administration of new drugs to man that was to become progressively more complex with time. Pre-clinical and clinical safety organizations in industry were greatly expanded. Small-scale, interpersonal, translational medicine, 1950s and 1960s style, withered. A mighty engine was to replace it. In this era, targets and molecules multiplied like a horde of locusts. The progressive decoding of the human genome was accompanied by rapid expansion of the number of molecules that could be synthesized and use of highly automated systems for screening their pharmacological activity. The true believers in LiT2 believed that a cascade of new drugs would fall off the end of the automated HTS systems. The idea of making cDNA copies of mRNAs in vitro and amplifying them in bacterial plasmids goes back to the 1970s but the term expressed sequence tag (EST) was used from 1991. Isolation and sequencing of an EST made it possible to pull out complete genes (Adams et al., 1991). The decoding of the whole human genome in 2002 offered a cornucopia of potential targets: 360 GPCRs, 500 kinases, 400-500 proteases and 200-300 transporter molecules, but, initially, there were relatively few data on their role in physiological and biochemical systems and even fewer in common diseases. The industry's answer was HTS and combinatorial chemistry to produce a very large number of compounds. HTS had its origins in natural product screening by Pfizer and others by 1986. By the early 1990s, HTS, plus combinatorial chemistry, was being promoted as the solution to improve productivity in drug discovery. Companies invested heavily in highly automated facilities and in assembling large compound libraries. HTS methods were later adapted to measure some drug metabolism and safety targets. As there were often no known natural ligands to be used as a template for drug discovery on the novel targets, the reaction of industry was to synthesize very large numbers of chemical entities to feed the HTS. The term 'Combichem' was born.
The 'pool and split' method involved attaching the starting compounds to polymer beads, then splitting into 50 groups and reacting them with a second set of reagents. Highly automated parallel syntheses that produced singlet molecules with known structures largely superseded this approach, but the aim remained to produce enormous libraries to feed the screens. HTS initially used ligand binding assays, although later cell-based assays were widely adopted. The problem was that the HTS 'hits' with the highest binding to the target were often relatively large lipophilic molecules. The chemists working on 'hit to lead' and 'lead optimization' were more likely to add groups than subtract them. The end result was often a highly lipophilic, high MW molecule with very low water solubility at the pH of the small intestine. Experience showed that such molecules are difficult to develop for a number of reasons, including low bioavailability because of low water solubility and a substantial failure rate in pre-clinical toxicology and clinical safety assessment due to off-target effects. Developing a drug molecule to a point where it can be administered to a human for the first time involved many disciplines: hit to lead and medicinal chemistry, scale-up chemistry to simplify syntheses, biology, pharmacology and bioassay development, drug metabolism and pharmacokinetics, and safety assessment in more than one pre-clinical species. Input from discovery medicine was usually only sought at the time of a candidate molecule selection or, more often, when the molecule was deemed ready to be administered to man for the first time. The drive to improve the quality of molecules and their developability properties was led by Christopher Lipinski of Pfizer (Lipinski, 2004) who proposed his rule of five, the most important components being a MW of <500 and c log P < 5. Subsequent extensive studies of compound failure rates led to most companies limiting the acceptable MW (<450 Da) and lipid solubility of candidate molecules (c log P < 3). Somewhat later, it was realized that cell membrane permeability and good water solubility at the pH of intestinal fluid were very important. Keeping the oral drug dose low (<50 mg) to minimize off-target and idiosyncratic effects was also recognized as important. The long time lag in the pharmaceutical industry between selecting a lead chemical molecule and its being marketed (typically 10-12 years) meant Lipinski's advice was slow to have a major impact on medicines in late development. This author has on his desk a mouse mat with the heading 'You can make it', a large picture of a red brick beneath it and, below that, the note 'Can we develop it?'. It serves as a constant reminder that molecules with brick dust-like water solubility are at a very big disadvantage. The development phase of new medicines also increased in complexity. Early studies to demonstrate proof of concept (PoC) were now carried out by CROs. They are very competent at following a protocol but their use largely prevented industry drug discovery scientists and physicians from gaining practical experience. The pharmaceutical industry staff writing protocols often had limited personal experience of conducting early studies in man and very limited contact with the investigators undertaking them. Larger and larger clinical trials, some including tens of thousands of patients, required more and more study centres in more and more countries.
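As an aside on the developability criteria discussed above, the following is a minimal, hypothetical sketch of how a rule-of-five style filter might be expressed; the thresholds are those quoted in the text (MW < 500 and c log P < 5, tightened by many companies to <450 Da and c log P < 3, with the oral dose kept below 50 mg), while the candidate names and property values are invented for illustration and would in practice come from a cheminformatics package.

```python
# Hypothetical developability screen using the thresholds quoted above:
# Lipinski's most important rule-of-five components (MW < 500, c log P < 5),
# the tighter limits later adopted by many companies (MW < 450 Da, c log P < 3)
# and a predicted oral dose below 50 mg. Candidate names and values are invented.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    mol_weight: float   # Da
    clogp: float        # calculated log P
    est_dose_mg: float  # predicted human oral dose (mg)

def developability_flags(c: Candidate) -> list[str]:
    flags = []
    if c.mol_weight >= 500 or c.clogp >= 5:
        flags.append("fails rule of five (MW/clogP)")
    if c.mol_weight >= 450 or c.clogp >= 3:
        flags.append("outside tighter in-house limits")
    if c.est_dose_mg >= 50:
        flags.append("predicted dose >= 50 mg (off-target/idiosyncratic risk)")
    return flags

# A 'brick dust' style HTS hit versus a lead-like molecule
for cand in (Candidate("HTS hit", 520, 5.6, 200),
             Candidate("optimized lead", 380, 2.4, 20)):
    print(cand.name, "->", developability_flags(cand) or ["no flags"])
```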
Recruiting across many countries involved an assumption that a disease under study with the same diagnostic label in different countries is almost identical to the disease in the USA, Western Europe or Japan, where almost all new drugs originate. This premise also involves an unstated assumption that drug response and metabolism (Daly, 2012) and background medical care are very similar throughout the world, but this is not always true. Research on idiosyncratic drug toxicity has shown strong relationships to specific human leucocyte antigen (HLA) groups (Spraggs et al., 2012) and the frequency of the many different HLA groups varies around the world. The problems of developing a worldwide safety and efficacy profile of a new medicine are usually manageable but they are an additional source of variability in drug efficacy and safety. Despite all these problems, the industry still managed to produce some very important new medicines in LiT2. The successes of LiT2 came mostly when a specific target largely responsible for driving a disease was identified and was accessible to intervention and engagement by a new molecular entity (NME) or a mAb. There was little progress when target identification was uncertain or the target was not accessible to any current intervention. Ageing patients with damaged or failing organs, or malignant tumours, began to dominate the search for new targets. The difficulty in identifying a tractable target, epitomized by the failure of very large trials in Alzheimer's disease, for example, bapineuzumab (Salloway et al., 2014), was the end game for LiT2. There had to be a fundamental rethink, particularly about the need for earlier identification and intervention in chronic diseases and the problems of older patients who were often being treated for more than one medical problem. Intracellular signalling pathways were a new target class arising from the identification of a large number of kinases in the human genome. One of the most important advances during this period was the discovery, analysis and classification of the role of protein kinases in critical signalling pathways within cells (Manning et al., 2002; Hunter, 2009). The kinome posed a major challenge to medicinal chemists as their target was the ATP binding pocket that is highly conserved in the human protein kinase family (Liao, 2007; Muller, 2009). It is a great credit to medicinal chemistry that it proved possible to produce reasonably selective low MW inhibitors of these kinases, although the degree of specificity that was possible in designing small molecules to interact with GPCRs has rarely been achieved with kinase inhibitors. Kinase inhibitors have become very important in therapeutics, starting with imatinib and progressing through several VEGF inhibitors, the serine/threonine-protein kinase B-Raf (BRAF) and mitogen-activated protein kinase kinase (MEK) inhibitors in patients with malignant melanoma whose tumours carry the V600E mutation in BRAF and, most recently, inhibitors of the Janus kinases JAK1 to JAK3. A major contribution to translational medicine with kinase inhibitors was the measurement of target phosphoproteins in tumour biopsies. Without this information, it was necessary to carry out clinical trials lasting months, with the endpoint being reduction of tumour size on a radiograph using the relatively crude method known as RECIST (response evaluation criteria in solid tumours).
The crystal structures of the targets of marketed kinase inhibitors, with the drug molecule in situ, have been solved, and modelling techniques and experimental work are pursuing non-competitive inhibitors (Schnieders et al., 2012). The ability to bring together genome analysis of kinases and their mutations with molecular biology, cell biochemistry, crystallography, fragment-based drug design and direct measurement of kinase inhibition in biopsies from the human tumour was a reflection of the growing power of translational medicine at the convergence point of many disciplines (Schwartz and Murray, 2011). The greatest therapeutic achievement during LiT2 was the remarkable advance in the treatment of human immunodeficiency virus (HIV) infection. From a deadly disease with no effective treatment, HIV infection has been transformed: anti-retroviral therapy has achieved virtually complete viral suppression in most patients, restored immune function and seems likely to give therapy-adherent patients close to a normal lifespan (Delaney, 2006). The remarkably rapid progress achieved was due to two main factors: (i) intensive efforts across the pharmaceutical industry, leading, later, to collaborations in developing highly active drug combinations; and (ii) availability of rapid virus genome sequencing to track new mutations. HIV-1 genome sequences are critical for the discovery of drug-resistant mutations and play an important part in clinical trials (Gall et al., 2012). High-throughput deep sequencing of the HIV genome has been used to investigate the evolutionary dynamics of HIV-1 during the early stages of acute infection (Henn et al., 2012). The use of massively parallel sequencing of small samples to reveal the whole genome of the infecting agent is an important step forward in translational medicine that has potential applications well beyond HIV virology, for example, the SARS virus and H7N9 influenza, vaccine production, epidemiology and oncology (Malboeuf et al., 2012). The pharmaceutical industry has produced more than 25 compounds active against HIV for clinical use. These fall into seven categories: nucleoside reverse transcriptase inhibitors, nucleotide reverse transcriptase inhibitors, non-nucleoside reverse transcriptase inhibitors, protease inhibitors, cell entry inhibitors, co-receptor inhibitors and integrase inhibitors (De Clercq, 2009). These formed the basis of highly active antiretroviral therapy (HAART) combinations consisting of three or more potent anti-HIV drugs (Davey et al., 1999), commonly reverse transcriptase inhibitors and protease inhibitors (Katlama et al., 2013). There is concern about mechanism-based, very long-term mitochondrial toxicity with the use of the nucleoside and nucleotide reverse transcriptase inhibitors, thought to be the causal factor in late-onset pancreatitis and peripheral neuropathy (Cote et al., 2002). Lipodystrophy is also a relatively common problem but with no clear idea of the mechanism. However, most patients still take HAART medicines because of their efficacy; many are alive and reasonably well 20 years after the diagnosis of HIV, with the viraemia suppressed, although not cured. A very important lesson for translational medicine was born with HIV: patient power. Many of the patients who were first to contract HIV were young, intelligent, vocal and influential homosexuals. They were not going to see their life drain away with cachexia, Kaposi's sarcoma and multiple infectious illnesses, if they could do something about it.
Intense patient pressure was applied for rapid approval of new medicines with demonstrations outside pharmaceutical research companies. Later, campaigns to make HIV drugs available in poor countries, particularly in Africa, had a high impact on companies, regulators, governmental and international agencies, medical foundations and charities. Pressure for more rapid translation when the first glimmer of hope comes for an effective new treatment in a serious disease, even if it is rare, will be a fact of life for the future. The Food and Drug Administration (FDA) now has a 'breakthrough' accelerated review process for highly promising new agents. Blogs for discussion among patients taking part in clinical trials, or groups of patients with a serious disease, are becoming frequent and can be very informative. Translational medicine does not end in a doctor's office. In the new era, an important part begins there. Those who care most about discovering and testing new drugs are the patients who need them. The development of medicines during LiT3 has seen great achievements but at the same time is much more complicated than it was in LiT1 or LiT2. Discovering new drugs has got harder and there is much greater emphasis on safety of medicines and their cost. The additions to the simple formula of LiT1, C*B*M = T, include BD for Big Data, Pr for protein therapeutics, R for regulators, Pt for patients and two Ss, Sa for safety and So for the sociology (adherence) of translational medicine and, last but not least, Pu for purchasers. The whole profile of populations and clinical medicine is changing largely due to the successes in treating cardiovascular disease. UK national statistics showed that, of each 100 000 male cohort, 1975 had reached the age of 90 in 1975; in 2000 there were 9213 nonagenarians, and by 2010 the number had increased to 16 210. These changes were, in large part, due to medical advances. The population is ageing but there is now an expectation of reasonably good health into the 80s, albeit often sustained by more than one form of pharmaceutical therapy. Developing new treatments for control of complex systems with drug combinations is well developed in virology and, to some degree, in oncology, but the regulatory and clinical trial complexity has deterred companies from attempting to develop two or more novel molecules simultaneously. There have been three great landmarks in genomics: the first was the completion, in 2003, of the HGP and the discovery of 20 000-25 000 protein-coding human genes (Clinton and Blair, 2000); later came the search for associations of genetic polymorphisms with human disease (GWAS) (The Wellcome Trust, 2009) and the ENCODE project (Maher, 2012), which revealed the importance of large regions of the genome previously dismissed as 'junk'. The HGP concentrated on the 3.3% of the human genome that codes for proteins; ENCODE (covering about 20% of the remainder) has opened a vision of the very complex processes by which the protein-coding sequences are controlled by transcription factors, enhancers, etc. ENCODE will provide an opportunity to re-run genome-wide association studies by linking single nucleotide polymorphism (SNP) associations to transcription factor sites and locus control regions that are sometimes >300 kb from the gene's promoter. For translational medicine, ENCODE has the potential to yield the greatest insights when it comes to modifying disease processes, provided that the mass of data can be interpreted.
ENCODE is not the whole story as there is another 40% of the genome that is transcribed but whose function is not known. There have been critical reviews of the ENCODE project (Graur et al., 2013) but the basic premise that the regulation of the protein-coding regions lies with large regions of the genome outside the coding regions has been established. The concept of using antibodies in therapeutics is not novel but the impact of recent developments on medicine has been enormous (Buss et al., 2012). The driver of this development was not just biotechnology but the rapid development of immunology, which revealed multiple targets in cytokines, chemokines, B-cells and T-cell sub-types, all accessible to injected proteins. mAbs are large protein molecules with a very high affinity for their protein or peptide targets, and they can bind to targets that are too complex for small agonist or antagonist molecules to engage. Their affinity for targets is often two logs higher than that of equivalent low MW drugs. mAbs are glycoproteins, usually of the IgG family, with a MW of about 150 kDa and are made up of two heavy and two light chains. The 'Y'-shaped structure is joined at the hinge region by a number of disulphide bonds at the junction of the Fab (fragment antigen binding) and Fc (fragment crystallizable) domains. The antigen-binding sites on the Fab domain are known as complementarity-determining regions made up of six polypeptide segments. The crystal structure of antibody proteins has been characterized (Edmundson et al., 1993). The first therapeutic mAb, OKT3, an anti-CD3 mAb, was registered nearly 30 years ago, but it was the work of Kohler and Milstein (1975) and subsequent developments in this technology that transformed the field. There are now over 20 mAbs on the market and some have enjoyed enormous clinical and commercial success. The history of these developments is described in a recent review (Buss et al., 2012). Early hybridoma monoclonals were murine-derived, using antibody-producing cells from mice immunized with the target molecule, fused with a murine myeloma. The mouse protein was a substantial disadvantage for use in man because of immunological reactions to the foreign (non-human) protein, formation of human anti-mouse antibodies and a relatively short half-life. These problems were minimized by creating chimeric mAbs by grafting the antigen-specific variable domain of a mouse mAb onto the constant domains of a human antibody using biopharmaceutical techniques. This technology did not entirely overcome the problem of anti-mAb antibodies in man and the next step was to graft only the hypervariable, target antigen-specific region from a mouse antibody onto a human antibody framework. This further reduced the antigenicity, but immunogenicity in man remained a problem. The final step was to create fully humanized mAbs using phage display technology and transgenic mouse strains expressing human antibody domains (McCafferty et al., 1990; Lonberg et al., 1994). Although more attention has been given to the hypervariable regions, the Fc domain has also been modified to vary the half-life of the molecule and modify the engagement with the Fc γ-receptor on immune cells. Partly or fully humanized mAbs now dominate therapeutic antibody development in many areas of immunology. Progress has continued. Antibodies with dual specificity have been generated by both biochemical and biopharmaceutical means.
Dual targeting of EGFR and IGF-1R has been pursued actively and other oncology examples include dual affinity for HER2 and HER3. Dual targeting in a single mAb has also been employed in inflammation and in fungal infections (Kontermann, 2012). A heavy/light chain pairing targeting mouse CD3 and human EpCAM, expressed on adenocarcinomas, was able to kill tumour cells very efficiently, at low picomolar concentration (Chames and Baty, 2009). There is increasing interest in domain antibodies (Holt et al., 2003). These are the smallest known antigen-binding fragments of antibodies and consist of a peptide chain of about 110 amino acids. They have both diagnostic and therapeutic potential (Even-Desrumeaux et al., 2012). The basic concepts of small-molecule pharmacology, including the S-shaped dose-response curve and Schild plots, cannot be applied readily to mAbs. The large size of the antibody molecules means that they have to be administered i.v. or s.c. They are largely restricted, by slow diffusion, to the plasma space and to targets that are circulating in the blood stream or on accessible cell membranes. mAbs can diffuse into areas of inflammation where the vasculature is more permeable, such as an inflamed joint, but only cross the normal blood-brain barrier in very small amounts and only if they are transported. The mechanisms of entry are not entirely clear. Good clinical responses have been seen in multiple sclerosis patients but these may depend upon systemic actions. The very high affinity for the target means that sub-target-saturating doses disappear from the circulation in a few seconds. The mAbs are almost always given in supramaximal doses to maintain an effective concentration for days or weeks. A 'dose-response' for a mAb therefore often refers to the effect of dose on the duration of action. The pharmacokinetics (PK) of therapeutic antibodies has been extensively investigated (Dostalek et al., 2013). Native IgG antibodies have a half-life of 20-30 days, an attractive property for molecules that have to be administered parenterally. In the absence of target-mediated clearance, mAbs have a similar long half-life. mAbs are largely restricted to the plasma space because of their size and resulting slow diffusion. Antibodies and albumin are taken up through cell membranes by pinocytosis into vesicles where they bind to the MHC-related Fc receptor, FcRn. Binding to FcRn largely protects these proteins from intracellular catabolic degradation in the acidic endosome and allows recycling and release at the cell surface. Mutations introduced to enhance the Fc binding of the mAb to FcRn in the acidic endosome have been used to prolong antibody catabolic half-life. mAbs show a substantial amount of PK variability and a major factor is target-mediated clearance when the target is on a cell surface (Berinstein et al., 1998). The initial dose is cleared rapidly as it binds to the target but a subsequent dose may have a much longer half-life because most of the target antigen sites are already occupied by the first dose. Various measures have been investigated to alter both catabolic and target-mediated clearance, for example, by altering the isoelectric point of the mAb molecule. mAbs can themselves be immunogenic even in a fully humanized form. If they generate neutralizing anti-mAb antibodies, this can lead to a reduction of efficacy. Systemic reactions to the infusion are usually manageable with antihistamines ± corticosteroids, often given prophylactically.
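Returning to the PK behaviour described above, the toy simulation below illustrates, under assumed parameter values, why a sub-target-saturating dose of a mAb disappears quickly while a supramaximal dose displays something close to the long catabolic half-life; it simply combines a slow linear (FcRn-protected) clearance with a saturable target-mediated component and is a sketch, not a validated model.

```python
# Toy simulation (assumed parameters, not a validated model) of mAb elimination
# by a slow linear catabolic clearance (FcRn-protected, ~25 day half-life)
# plus a saturable, Michaelis-Menten style target-mediated component.
# A sub-saturating dose is removed rapidly by the target-mediated route,
# whereas a supramaximal dose shows the long catabolic half-life.

import math

def time_to_halve(dose_mg_per_L: float, days: float = 60.0, dt: float = 0.01) -> float:
    c = dose_mg_per_L               # serum concentration after an i.v. bolus
    k_cat = math.log(2) / 25.0      # linear catabolic rate constant (per day)
    vmax, km = 0.4, 0.05            # target-mediated elimination (mg/L/day, mg/L)
    t = 0.0
    while t < days and c > dose_mg_per_L / 2:
        elimination = k_cat * c + vmax * c / (km + c)   # mg/L per day
        c -= elimination * dt                           # simple Euler step
        t += dt
    return t

for dose in (0.1, 20.0):            # sub-saturating vs supramaximal dose (mg/L)
    print(f"dose {dose:>5} mg/L: concentration halves in ~{time_to_halve(dose):.1f} days")
```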
Neutralizing antibodies are sought routinely in drug development but assays are quite difficult in the presence of an excess of unbound mAb. The first clue may be diminished activity in clinical use. The widest use of mAbs in therapeutics, to date, has been to inhibit cytokines, growth factors and immune cell functions but there have also been important applications in oncology, particularly for HER2-positive breast cancer (Hudis, 2007). TNF-α antibodies were the first to be widely used and have transformed the treatment of rheumatoid arthritis. mAbs have had a major impact on immunotherapy with antibodies that lead to the destruction of cells carrying the appropriate epitope, such as rituximab inducing apoptosis of B-cells carrying the CD20 epitope. mAbs have been explored for their effect on CD3-carrying T-cells in a wide range of conditions including transplant rejection, Crohn's disease, ulcerative colitis and type 1 diabetes. The effect of the superagonist TGN1412 on the CD28 receptor of T-cells caused a near catastrophe in a group of healthy volunteers due to the intensity of the cytokine release reaction (Suntharalingam et al., 2006; Eastwood et al., 2010). A full report of the disaster and detailed recommendations on avoiding future problems with potentially high-risk human studies were published in the Duff Report (Department of Health, 2006). Potent inhibition of the immune system carries other risks, notably predisposition to infections and possible long-term effects on the incidence of tumours. Antibody drug conjugates (ADCs) attach a toxic or radioactive 'bomb' to an antibody directed at a tumour-specific antigen. The concept is not new, but the technology to attach a payload to an antibody that will be internalized and release the toxic moiety inside the cell has been substantially developed. Current products aim to bind a payload of 3 or 4 cytotoxic molecules to a mAb by chemical linkers that can be broken in the acidic endosome environment inside the cell. Mertansine (a toxic maytansinoid) has been attached to trastuzumab and proved effective in breast cancer (Hudis, 2007; Junttila et al., 2011). Over 30 ADCs are under investigation and considerable attention is being directed at the chemical linker that may be cleavable or non-cleavable, its stability in the circulation, potential immunogenicity, etc. ADCs may herald a new era in oncology. Improvements in the chemical design of molecules are producing better quality molecules in terms of their developability and that improvement is industry-wide (Leeson and Springthorpe, 2007). There have been rapid advances in high-throughput crystallography of protein molecules (Blundell et al., 2002) and in fitting a jigsaw of small chemical fragment structures into the target (Hajduk and Greer, 2007). Fragment-based drug design often yields smaller drug molecules than the incremental manipulation and addition of chemical groups to leads from high-throughput screens. There has also been considerable progress in computer modelling of chemical structures in three dimensions to assist lead optimization (Tetko et al., 2005). The use of miniaturized fluid flow and other automated systems facilitates syntheses (Whitesides, 2006). Pressure for greener chemistry has attracted great interest in the use of enzymes, of which biology provides an enormous diversity, as an alternative to conventional chemical reactions (Koeller and Wong, 2001).
Advances in medicinal chemistry have also made it easier to synthesize three-dimensional (3D) chiral molecules to increase structural diversity in screens. Substantial improvements in MS and NMR techniques have made it much easier to verify structures. An industrial synthetic chemistry laboratory these days has little resemblance to the wet chemistry of LiT1. The laboratory is filled with automated synthetic and analytical machines linked by automatic sample handlers while the medicinal chemists sit in front of their high-resolution computer screens in another room reviewing the results and planning the next step. Medicinal chemists now pay more attention to concentrations at the site of action when the target is intracellular (Dollery, 2013). Drugs that are capable of entering the cell do so either by permeation through the cell membrane or by acting as a substrate for a transporter that may facilitate entry into the cytoplasm or egress from it. Some tissues, particularly the liver, kidney, gut and blood-brain barrier, have a large variety and high density of transporters. The distribution of transporters in a cell is often polarized with different transporters on the bile/urine or CSF face from those on the blood face. A large number of transporter molecules have been identified and a number have been shown to have important effects on widely used drugs: P-gp (digoxin), BCRP (rosuvastatin), OATP1B1 or OATP1B3 (statins), OAT1 or OAT3 (methotrexate and tenofovir) and the organic cation transporter OCT2 (metformin) (Food and Drug Administration, 2012). Once in the cytoplasm, a basic drug may be concentrated in the lysosome because of the large pH gradient. At very high concentrations, this may impair lysosomal function, for example, by inhibiting phospholipases or lipid kinases. Increasing attention is being paid to mitochondrial toxicity, some of which may be due to drug concentration in the charged intermembrane space. Impairment of autophagocytic functions, which slows removal of damaged structures such as mitochondria, is important in overall cell function. Drugs have been developed for inhibiting the proteasome and are already in use for the treatment of multiple myeloma (Morabito et al., 2011). Current methods for measuring cellular distribution of drugs and their metabolites rely on quantitative whole body autoradiography (QWBA), isolation of cell populations, homogenization of organs and matrix-assisted laser desorption ionization MS imaging techniques. Measuring distribution of a drug or its metabolites within a single cell is very much more difficult, but progress is being made by MS imaging techniques such as secondary ion MS. There has been recent interest in cellular thermal shift assays (Martinez Molina et al., 2013). These are all ex vivo methods. The difficulty of measuring the concentrations of a molecule that engages with an intracellular target is a problem for pre-clinical safety assessment, pharmacology and clinical pharmacology. Reliance upon plasma concentrations may be misleading and may account for the relatively low power of many PK/PD correlations. Biological aspects of target selection and validation are an order of magnitude more complex, and more difficult, than the chemical issues. A basic question is: what is meant by a drug target in a translational sense? Choosing targets needs integrated thinking over a range of disciplines; it is not simply a matter of selecting an interesting molecule for a target that may play a role in disease.
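As a worked illustration of the lysosomal trapping mentioned above, the standard pH-partition (Henderson-Hasselbalch) argument predicts how strongly a basic drug concentrates in the acidic lysosome relative to the cytosol; the pKa and compartment pH values below are assumptions chosen only to show the scale of the effect.

```python
# Illustrative pH-partition (Henderson-Hasselbalch) estimate of lysosomal
# trapping of a basic drug. It assumes only the neutral species crosses
# membranes and equilibrates; the pKa and compartment pH values are
# assumptions chosen purely for illustration.

def basic_drug_ratio(pka: float, ph_inside: float, ph_outside: float) -> float:
    """Total (ionized + neutral) concentration ratio, inside : outside."""
    total_inside = 1 + 10 ** (pka - ph_inside)    # relative to the neutral form
    total_outside = 1 + 10 ** (pka - ph_outside)
    return total_inside / total_outside

# Lysosome (pH ~4.8) versus cytosol (pH ~7.2) for a base with pKa 9
ratio = basic_drug_ratio(pka=9.0, ph_inside=4.8, ph_outside=7.2)
print(f"Predicted lysosome : cytosol concentration ratio ≈ {ratio:,.0f}-fold")
```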
A recent paper from Pfizer set out three pillars of survival for a compound that is being developed as a medicine: (i) achieving sufficient concentration at the site of action; (ii) target binding; and (iii) expression of functional pharmacological activity. Pfizer investigated compound failures in man and found that 42% failed to achieve those three objectives, so the reason for failure was never fully established (Morgan et al., 2012). The paper overlooked the fourth, and arguably the most important, pillar on which therapeutics stands: the effect of the pharmacological action on the disease process. Biology has had to face big challenges, not least the pressure to reduce the use of animals, particularly primates. Evaluating the biology of a potential target molecule in drug discovery raises many questions. What is its role in normal physiology? What is the evidence that the target plays a role in a human disease, and if it does, to what extent and in what circumstances? Which cells express the target? Is it accessible from the extracellular fluid? In what environment does it function? Is its activity dependent on essential associated proteins? Does its role differ substantially between pre-clinical species and in man? Are there human polymorphisms that may influence the phenotype of the target disease and provide a clue to the importance of the target? The eventual choice of a target is usually a single molecular entity but it is imperative to understand the role of that molecule in a complex system environment. The expertise needed runs from bioinformatics through molecular and cell biology to systems biology, physiology, pharmacology, safety assessment and discovery medicine. The name now applied to this is systems pharmacology. Many universities, for example, Harvard, have announced that they are setting up special initiatives in systems pharmacology (ISP; Harvard Medical School, 2011) and the FDA has made policy statements in support of the concept. A major problem in implementation, in both academia and industry, is that the number of physiological and pharmacological scientists, particularly in human physiology and pharmacology, has been seriously depleted over the past 25 years. An essential component of early drug development is a reliable biological assay. Without it, the medicinal chemists cannot make progress. Often the lead molecule is selected by HTS using an assay based on cells overexpressing the human target molecule. Overexpressing the target molecule in a cell means that there are more binding sites available for the assay but it is in many ways an artificial system whose cells usually lack a matching complement of signal transduction molecules, co-factors, dimer partners, chaperones, etc. Particularly for agonists, there may be large differences (up to 100-fold) between data from these artificial constructs and a phenotypically normal cell with the correct complement of associated human proteins and pathways. These concerns have led to a greater emphasis on phenotypic assays but this term has a wide spectrum of meanings. These range from primary isolates of human or animal cells to 3D constructs, such as the 'lung on a chip' (Wyss, 2013) or cells adhering to a matrix, such as those being used to model the liver (Sivaraman et al., 2005). The ultimate phenotypic assay is to use intact pieces of normal animal or human tissue, such as those used by Jim Black and John Vane during LiT1. Pre-clinical biology is still very dependent upon animal studies for studies of integrated function.
There are two problems: the translational power of animal physiology and biochemistry to man, and the translation of animal models of disease to man. Studies of intact healthy, unrestrained animals with telemetry monitoring of vital functions are of growing importance, as is the drive to move into man much earlier, provided there is adequate safety coverage. The physiology of the cardiorespiratory system has translated to man reasonably well; the gastrointestinal tract of rodents has significant differences from man; and the immune system of mice has large differences from man (Mestas and Hughes, 2004). The higher cognitive and reasoning faculties of rodents have limited translation to man. The most widely explored cross-species biochemical systems comparison has been of plasma lipids. Unsurprisingly, non-human primates have the closest resemblance, and rabbits and rodents the least. Statin treatment does not decrease plasma LDL-c levels in rabbit or rodent models (Yin et al., 2012). However, the general opinion of biologists is that data from normal animal physiological and biochemical studies are still valuable, provided appropriate species and tissues are chosen for the systems comparison. Physicians making decisions about human studies need to understand the potential limitations of translating these data to man. The translation of animal models of disease to man poses many difficulties. Many human diseases progress slowly over years, and as the condition evolves, different features appear. Environmental effects may be more important than genetics. Tissue inflammation may progress through different cell types and ultimately to fibrosis, for example, cirrhosis and advanced renal disease. Cardiovascular hypertrophy may be infiltrated by fibrosis and even calcification and become irreversible. It is very difficult to replicate either the aetiology (in many cases multifactorial in man) or the temporal evolution in a laboratory animal model. From a biologist's standpoint, disease models are the mainstay of their evidence to take a candidate molecule the next major step towards man. In so doing, they are likely to optimize their assays by selecting the species and circumstances that show the largest effect with the molecule under investigation. This assay selection bias is not intended to mislead but can lead to difficulty in reproducibility in other laboratories. There are also time constraints in this type of research, so an animal model of osteoarthritis may be induced in weeks, whereas in man the condition evolves over many years, and the usefulness of the animal model for inferences to man is questionable. The use of SCID mice with transplanted human tumours is widespread in oncology research but the correlation with human response has substantial limitations. This is probably because of the major role the human immune system has in controlling tumour growth and the varying ability of tumours to inhibit monocyte/macrophage attack. There is currently great interest in 'humanized' animal models for research (Shultz et al., 2007), particularly using mice with a mutated IL2rγ (null) gene (Brehm et al., 2010). Multi-tissue humanized mouse models are being developed but a substantial amount of work will be needed to document the advantages, limitations and translatability to man (Legrand et al., 2009; van der Worp et al., 2010; Ito et al., 2012). Humanized animal disease models may be more translatable to man than non-humanized models, but it is important to remember that they are only partly humanized.
As with physiological models, some may be more translatable than others, and intensive research will be needed to authenticate the type of model and disease that has a high correlation with responses in man. Primary isolates of human tissue are invaluable to check the applicability of animal models to human tissues but they have limited availability and may not fully replicate integrated function when removed from their host environment. There is a growing literature about the difficulty of reproducing published results from pharmaceutical and academic laboratories (Ioannidis, 2005). Bayer scientists were unable to replicate about two-thirds of published studies identifying drug targets (Prinz et al., 2011). While a small proportion of these data may be fraudulent, the main problem is probably that assays or test systems have been carefully optimized (although this is not always the case because of the desire for speed) to give the best results for that compound and biological system in the eyes of the experimenter, and less encouraging data are not reported. Reviewers for pharmacology journals need to pay more attention to the limitations of inferences drawn from pre-clinical models to drug development in man. The old scientific adage 'don't believe anything until someone else has replicated it' applies. One possible approach to interspecies comparisons is the use of technology developed in the ENCODE project that makes it possible to display the genome-wide distribution of transcription factor and enhancer sites and whether they are occupied. There are preliminary indications that these methods could be used as the readout in experimental physiology and pharmacology (Ecker et al., 2012). If this potential can be realized on an affordable scale, it could be used to answer all sorts of important questions about species-, organ- and cell-specific properties in experimental animals and man. If a compound progresses through the pre-clinical hurdles, particularly of safety, it eventually reaches the stage known as first time in human (FTIH) administration. This is the time for a full review of all the available data, not just pre-clinical safety and pharmacology, but predicted pharmacokinetics including peak concentration and clearance, bioavailability, plasma protein binding, human dose, metabolic routes and the enzymes involved, tissue distribution, including the possible role of transporters, and likely paths of elimination. It is a great help if these features can be built into a mathematical model. The initial dose will be based on calculations from pre-clinical safety findings (the no observed adverse effect level, NOAEL) and target engagement (the minimal anticipated biological effect level, MABEL). Since the TGN1412 disaster, there has been particularly close attention to actions that might trigger an immunological chain reaction and strict review of any mechanism that might have that potential (Brennan et al., 2010). FTIH studies are usually conducted in healthy volunteers with limited objectives, mainly safety and PK, although where possible measurements of a pharmacodynamic effect should be included. The PK modelling predictions should take into account the high variability, with poorly soluble drugs, in pre-systemic metabolism, plasma clearance and protein binding, and if they do not fit with the human data, the model coefficients will need revision. From the PK, a calculation of target engagement and predicted magnitude of the pharmacodynamic effects over the range of exposures that are planned should be made.
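A minimal sketch of the kind of target-engagement calculation described above is given below: predicted receptor occupancy from the unbound plasma concentration and an assumed Kd, with a simple link to a pharmacodynamic effect. Every numerical value is an assumption, and the relation ignores receptor reserve, which is discussed next.

```python
# Illustrative target-engagement calculation: predicted fractional occupancy
# from the unbound plasma concentration and an assumed Kd, with a simple
# Emax link to effect (no receptor reserve). All values are assumptions.

KD_NM = 5.0      # assumed binding affinity of the candidate (nM)
EMAX = 100.0     # assumed maximal pharmacodynamic effect (% of full response)

def occupancy(conc_nm: float, kd_nm: float = KD_NM) -> float:
    """Fractional occupancy for a simple reversible 1:1 interaction."""
    return conc_nm / (conc_nm + kd_nm)

def predicted_effect(conc_nm: float) -> float:
    """Effect assuming it tracks occupancy directly."""
    return EMAX * occupancy(conc_nm)

# Planned unbound exposures (nM) spanning an FTIH dose escalation
for c in (1, 5, 20, 100, 500):
    print(f"Cu = {c:>4} nM -> occupancy {occupancy(c):6.1%}, "
          f"predicted effect {predicted_effect(c):5.1f}%")
```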
A model purely of PK is of limited value. For receptor antagonists and enzyme inhibitors, a high degree of target engagement may be needed to produce a measurable pharmacodynamic effect, but for an agonist effect on receptors, only a small proportion may need to be engaged to produce a maximal response. Safety has become a dominant consideration in planning for FTIH, and a major factor in subsequent studies, for all but the most critical illnesses, and even there it is the subject of very careful review. Discovery of an efficacious new drug was a cause for excitement in LiT1 and LiT2 and continues to be so in LiT3, but it is tempered by much greater attention to safety. Safety now rates as highly as efficacy and receives the same multidisciplinary scrutiny. The TGN1412 disaster (Blanas et al., 2006) is a constant reminder of the need for the utmost care, particularly in the early stages of administration of a new drug. The buzz words in safety parlance are risk-benefit, risk mitigation and risk management, and they mean far more than just a warning note in the product label. They mean specifying the acceptable dose-exposure range, monitoring for predictable toxicity based on pre-clinical findings or known effects of the pharmacology, and a strategy for managing adverse events if they occur. Patients' consent must include a full explanation of risks and, critically, of signs or symptoms that may require them to seek clinical advice urgently. This often means stopping treatment or changing doses, and reporting adverse events to regulatory agencies and drug developers. The FDA-approved Risk Evaluation and Mitigation Strategies (REMS) framework is currently the most comprehensive (Food and Drug Administration, 2013). The EMA has a similar pharmacovigilance programme with the abbreviation ERMS. When serious adverse reaction reports for cardiac or liver issues are returned by investigators, it is GlaxoSmithKline (GSK) practice to send by return an email with a detailed questionnaire to help understand the event and its connection to the clinical trial treatment. These have greatly improved the quality of information about serious adverse reactions. Safety evaluation of a new drug begins with an assessment of risk-benefit. Issues can be divided into two broad categories: the first are risks inherent in the drug molecule and the second are risks inherent in the genotype or phenotype of the patient. The aim of safety monitoring in man is to anticipate and detect adverse effects at an early stage. Both on- and off-target effects in pre-clinical species need careful review. The safety margin in pre-clinical species should be >10-fold (some would argue 100-fold) for the onset of potentially serious organ damage, and even that may not be acceptable if there is no sensitive method for monitoring the effect in man, for example, seizures and pancreatitis. Route of administration, dose, potential for formation of reactive metabolites, and the main routes of elimination measured in animals and predicted for man must all be reviewed. Possible effects of an overdose, and the time it takes for the main pharmacological effect to wane, should be evaluated. The latter is particularly an issue with mAbs, which may maintain an effect for 3-4 months. mAbs that suppress immune function may lead to reactivation of the Epstein-Barr and JC viruses. In severe cases, plasma exchange may be the only way of terminating the effect. The second main category is risks inherent in the patient.
New drugs are screened routinely to see if they are a substrate for CYP2D6, CYP2C9 and CYP2C19, and whether they have an effect on the hERG ion channel and the QT interval of the ECG. A compound that is predominantly excreted unchanged in the urine will need an alert for patients who may have impaired renal function. The risk potential must be considered for patients who may be at increased risk of adverse effects because of a wide range of intercurrent illnesses such as liver disease, renal impairment, heart failure, dementia and diabetes. Concurrent medications may lead to drug-drug interactions or recurrence of previous adverse reactions to similar products. If there has been concern about a finding in pre-clinical species (the most common are hepatic and cardiovascular), additional monitoring will be required in man. Routine cardiac monitoring has improved greatly with troponin I for cardiac myocyte injury, NT-proBNP for cardiac wall stress, echocardiography to measure ejection fraction, and cardiac MRI techniques to measure chamber volumes, ejection fraction and myocardial wall contraction. Renal function testing in man still relies, to a great extent, on measuring serum creatinine and GFR, although more sensitive biomarkers for renal tubular injury have been qualified in animals, for example, KIM-1, albumin, total protein, β2-microglobulin, cystatin C, clusterin and trefoil factor-3 (Dieterle et al., 2010). These are being evaluated in man. Liver safety evaluation still relies on measurements of alanine aminotransferase (ALT) and bilirubin, although active work on new biomarkers is in progress, of which the micro-RNA miR122 is the most promising. Hy's law (Temple, 2006) is still used to identify very high risk liver injury and is invoked when ALT rises to more than three times the upper limit of normal (ULN) and total bilirubin exceeds 2× ULN. All these tests need to be interpreted with care. About 20% of serum creatinine clearance by the kidney is via the transporter OCT2, and a number of drugs can inhibit this transporter, thereby raising serum creatinine and causing unnecessary concern without renal damage. Hy's law interpretation may be incorrect if the patient has the polymorphism of glucuronyl transferase UGT1A1 that is responsible for Gilbert's syndrome and impairs conjugation of bilirubin prior to excretion in the bile. A number of examples of 'idiosyncratic' toxicity of the liver and other tissues have been shown to have strong associations with HLA groups, suggesting an immunological mechanism for the liver injury (Spraggs et al., 2012). Differences in the world distribution of HLA factors must also be taken into account, for example, the Stevens-Johnson syndrome with carbamazepine (Chen et al., 2011). Attention to safety extends far beyond early human studies. Known or suspected adverse effects are monitored throughout late phase clinical trials, and GSK has a by-return email system for eliciting much more system-specific information for cardiovascular and liver safety adverse effects. The extension of late phase trials to many centres in many different countries has posed a challenge: the general standard of medical care available away from the trial centre, which will have been carefully assessed, may differ. It is increasingly common for regulatory authorities to require large post-approval safety studies, for example, in diabetes.
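As a concrete illustration of the laboratory thresholds just described, the short sketch below (a hypothetical Python helper, not a validated or regulatory algorithm) flags a potential Hy's law case from ALT and bilirubin expressed as multiples of the ULN, and records the caveats about Gilbert's syndrome and OCT2 inhibition raised in the text.

```python
# Illustrative laboratory-signal screen; thresholds follow the text, not a regulatory standard.

def hys_law_signal(alt_x_uln: float, bili_x_uln: float) -> bool:
    """Potential Hy's law case: ALT > 3x ULN together with total bilirubin > 2x ULN."""
    return alt_x_uln > 3.0 and bili_x_uln > 2.0

def interpret(alt_x_uln, bili_x_uln, gilbert_genotype=False, oct2_inhibitor=False):
    """Return cautionary notes for a single set of results (hypothetical helper)."""
    notes = []
    if hys_law_signal(alt_x_uln, bili_x_uln):
        notes.append("Potential Hy's law case - urgent causality review")
    if gilbert_genotype and bili_x_uln > 2.0:
        notes.append("Raised bilirubin may reflect the UGT1A1 (Gilbert's) genotype")
    if oct2_inhibitor:
        notes.append("A creatinine rise may reflect OCT2 inhibition rather than renal injury")
    return notes or ["No flag"]

print(interpret(alt_x_uln=4.2, bili_x_uln=2.5))
print(interpret(alt_x_uln=1.1, bili_x_uln=2.4, gilbert_genotype=True))
```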
Safety issues such as these will become ever more important as trials in many common diseases focus upon earlier stages of disease, where safety becomes a paramount factor. These studies are time-consuming and costly. The standard way of developing new medicines is Phase I in healthy volunteers, with little more than pharmacokinetics, dose ranging and safety measurements, followed by a Phase IIA proof of concept (PoC) in patients with the target disease. Translational medicine investigators often refer to PoC as their endpoint, but this often muddles proof of a pharmacological effect (PoP) with proof of therapeutic efficacy (PoT). It is highly desirable to separate PoP from PoT, for clarity of thinking if problems develop and, more generally, to construct a response surface that includes desirable and undesirable actions. Without quantitative evidence of a human pharmacological action, it is difficult to progress to PoT with any confidence of success. Too often the highest doses are carried forward to maximize the chances of a therapeutic response, with little regard to the symptom burden and safety issues that may be carried with them. In oncology, the concept of maximum-tolerated dose is still sometimes used. It is very important to obtain a good dose/exposure-response curve over the safe PK and pharmacodynamic range. Lack of quality dose/concentration-response data can cause problems through the rest of drug development. For reasons of safety and convenience, PoC is often carried out in patients with disease of moderate severity and in limited numbers with relatively simple protocols. Increasingly, physician-scientists who carry out such studies are adopting a different approach that they term experimental medicine, and which my late colleague, EJ Moran Campbell, referred to as 'finding out what you are going to find out'. These are small-scale, very intensively monitored studies whose objective is to investigate the effect of the trial compound on pharmacological and/or disease mechanisms, patient safety and symptoms. The aim is to have a much better understanding of the drug effects, and of the most relevant parameters to measure, when it comes to designing formal PoP and PoT studies. Basic pharmacological measurements have always been carried out with new drugs, usually in more than one pre-clinical species and often with confirmatory phenotypic assays that may include human tissue. These data form the basis for calculating the concentration needed at the target in man to achieve effects ranging from the minimum to the maximum of the dose-response curve. The simplest measures of pharmacodynamic effects in man are when there is a readily measurable physiological response, such as slowing of the heart rate with a β-adrenoceptor antagonist, or sedation with an orexin antagonist. For accessible targets with a measurable response, agonist challenge can provide a useful way of evaluating an antagonist, for example, reduction of an i.d. antigen weal and flare in an atopic individual given an oral antihistamine, or a parenteral LPS challenge to study a potent systemic anti-inflammatory agent. Tissues such as white blood cells, skin, and endoscopic or operative tissue samples are very useful for human pharmacology if they express the target molecule. Increasing emphasis on assays with human tissue is a very useful cross-check on IC50 or EC50 assays with cell lines expressing the human target protein.
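Where such a quantitative concentration-response is available, the IC50 or EC50 is usually estimated by fitting a sigmoidal (Hill, or four-parameter logistic) model. The sketch below is a minimal illustration in Python using SciPy and entirely synthetic data; assay-specific weighting, replicates and quality checks are omitted.

```python
# Minimal sketch of an EC50 estimate from a concentration-response assay (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, hill_slope):
    """Four-parameter logistic (Hill) concentration-response model."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill_slope)

np.random.seed(0)
# Synthetic assay data: concentrations in nM and measured responses (arbitrary units)
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300], dtype=float)
resp = np.array([2, 5, 12, 30, 55, 78, 90, 95], dtype=float) + np.random.normal(0, 2, 8)

# Fit with plausible starting values for bottom, top, EC50 and slope
p0 = [0.0, 100.0, 10.0, 1.0]
params, cov = curve_fit(hill, conc, resp, p0=p0, maxfev=10000)
bottom, top, ec50, slope = params
print(f"EC50 ~ {ec50:.1f} nM, Hill slope ~ {slope:.2f}")
```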
Whatever assay is used, the investigator must have a good understanding of its accuracy, reliability and reproducibility in the circumstances of a human volunteer or patient study, and preliminary studies without the test article may be needed to validate the assay. One important item that is often overlooked in early human studies is accurate recording of symptoms using a volunteer- or patient-completed questionnaire, before and during a drug response. Symptoms represent a record of the input from the subjects' own sensory systems, gut, brain, etc., which are often more sensitive than anything else available. Some symptoms are common, for example, headache, often ascribed to caffeine deprivation, but others such as sedation, sleep disturbance, inattention, nausea, abdominal discomfort, diarrhoea and unusual fatigue are early clues to a pharmacodynamic effect. These symptoms are often not reported by patients or by carers, and adverse reaction reports greatly underestimate them. Where appropriate, very simple methods such as cine photography of gait, measurement of changes in 6 min walking distance, unsteadiness, computerized cognitive function tests and patient-completed quality of life questionnaires can provide very useful additional information, particularly in an ageing population. For less accessible targets, positron emission tomography (PET) deserves special mention because of its ability to deliver important pharmacological information. The original use of PET in man was to obtain information about entry and distribution of a labelled molecule in the brain. Since then, it has been applied to many body organs. The use of cold ligand displacement has made it possible to plot a dose-exposure target occupation curve, information that is not obtainable by other means in an intact human. Essentially, it yields an in vivo 'S'-shaped curve of receptor occupancy (Matthews et al., 2012; Kwee et al., 2013), and the technique is also useful in small animals (Lancelot and Zimmer, 2010). Making the radiolabelled compound, usually with ¹¹C, limits the number of studies in one individual, and arterial blood sampling to study the profile of the radiolabel and metabolites presented to the target tissue poses some limitations. PET is an invaluable technique for human studies in translational medicine. Obtaining a quantitative pharmacological signal in man can, in some situations, be very difficult. Biomarkers, ranging from transcriptomics and micro-RNAs to proteomics and metabolomics, are being used to provide some information (see below). They are only really acceptable for PoP if they have a clear mechanistic link to the target. The link between the pharmacodynamic effect and the therapeutic response can be very close and easy to measure, for example, inhibition of the histamine H2 receptor and gastric acid secretion, but more often it is indirect and incomplete, as in complex conditions such as dementia, pain, cancer and many kinds of degenerative change such as chronic inflammation, tissue damage and fibrosis. A basic problem with PoT is that the pathophysiology of these diseases often evolves slowly and eventually some features may become irreversible, for example, a severely damaged joint surface or loss of neuronal function in the brain. Response to a therapeutic intervention may depend upon the stage of the disease and may be slow even when the drug is having a desired effect. These questions can be answered by large, long-term, costly clinical trials, but these limit the number of compounds that can be investigated.
Progress is being made by identifying the patient subgroups most likely to respond, such as the roughly 40% of melanoma patients who carry the BRAF V600E mutation, which constitutively activates the BRAF kinase. Identification of a single factor that makes a major contribution to aetiology in a common disease is rare, and translational medicine badly needs more sensitive and shorter-duration methods that can give a preliminary indication of a potential therapeutic effect. The pressure to find new methods of measuring drug action in man that do not require lengthy clinical trials is intense. It is one of the biggest bottlenecks in drug development. From it, the biomarker boom was born. The techniques available have advanced considerably in LiT3, and they range from imaging to omics. The greatest impact on clinical diagnosis in the last 30 years has come from imaging, and this has evolved from generating static two-dimensional images to quantified, 3D, dynamic images of brain, heart, lung and other tissues. These methods have important applications, particularly in the earlier phases of drug development (van der Geest and Reiber, 1999; Ley-Zaporozhan et al., 2008; Johnson et al., 2012), although the most sophisticated imaging equipment may not be widely available for large, world-wide clinical trials. A detailed consideration of these methods is beyond the scope of this review, but they are a key to many early investigational studies of drug action. Functional MRI deserves special mention (Bandettini, 2012). As novel target classes expanded, the need for new methods of measuring drug action, particularly to obtain a quicker and more specific readout, became a high priority. The terms biomarkers and omics have become an important part of the verbal currency of translational medicine. Biomarker is a term that enthusiasts use to encompass everything from a pulse rate measurement to a high sensitivity assay of picograms of a signal molecule in blood. Omics became one of the buzz words of investigative and translational medicine and is attached as a suffix to a number of different techniques. Users of these assays are not always aware of their limitations in selectivity, precision and accuracy, and of the extent to which they have been validated for measuring a biochemical, physiological, pharmacological, diagnostic or therapeutic response. Many biomarkers and omics need more work to validate them for important decisions. One of the consequences of the growth of these techniques has been the need to integrate more than one complex data set, for example, proteomics, metabolomics and transcriptomics, rather than relying on a single parameter such as serum creatinine or bilirubin. There is much work to be done to validate these new methods against long-established methods and clinical parameters. Some of the most commonly studied biomarkers in translational medicine are the cytokines and chemokines released during inflammation, and these have become very important drug targets and indicators of pharmacological activity. Initially, their measurement relied on immunoassays that were not always completely specific, but immunoassays have improved and are being replaced, in many cases, by MS assays with much higher specificity. The targets are too many to list, but they range from the traditional C-reactive protein through TNF, IL-1, IL-6, IL-10, IFN and the chemokines (O'Shea and Murray, 2008). The first to be widely used in medicine were antibodies directed against TNF in rheumatoid arthritis.
The results were excellent in many patients, but complete resolution of the disease was not common and about a third of patients do not respond. Interventions with different mAbs directed at the TNF cytokine or its receptor may have different mechanistic effects (Tracey et al., 2008). Cytokine responses are often pleiotropic, and the full range of activity of a cytokine such as IL-6 was only discovered as mAbs were developed against the cytokine and its receptor and used to treat human diseases such as rheumatoid arthritis (Feldmann and Maini, 2010; Kishimoto, 2010). A newer range of cytokine-related drug targets are the JAKs, which bind to the cytoplasmic region of the transmembrane cytokine receptors and signal via the STAT pathway (Okamoto and Kobayashi, 2011). Chemokines are a particularly complex area. More than 50 different chemokines and 20 different chemokine receptors have been cloned, but their pleiotropic effects make study of single-agent interventions very difficult (Ratajczak et al., 2006). There are over a thousand microRNAs. Most of the conserved microRNAs repress genes with a wide variety of biological and molecular functions (Bartel, 2009). In translational medicine, the main interest has been in microRNAs in plasma, as they are chemically stable. They are proving to be useful markers of liver dysfunction, for example miR122 and miR34A (Wang et al., 2012), and are being studied as an additional signal of cardiac damage (Dorn, 2011). There are potential applications in cancer using microRNA levels as a guide to tumour response (Kasinski and Slack, 2011). However, interpretation, as with the other 'omics' (see below), is not easy, because the measurement in plasma is more a measure of leakage from cells than of a direct, mechanism-linked pharmacodynamic action of a drug. The growth of MS, with the ability to analyse 1000 or more molecules simultaneously, has given 'omics' the ability to generate enormous quantities of data, with the attendant need for complex statistical analysis of a forest of peaks. There is a Nature database on 'omics' (Nature/omics, 2010). These methods have great potential value, but a 2013 comment in Nature was, 'Despite numerous publications, however, few omics-based predictors have been translated successfully into clinically useful tests' (McShane et al., 2013). This cautious comment refers mainly to diagnostic tests, and it does not mean that 'omics' may not be useful initially in experimental medicine studies and in toxicology, followed by wider use as the ability to interpret the data progresses. There is a need for a systematic consortium approach (as has already taken place to some extent in toxicology) to look upon 'omics' as a very powerful new set of tools for making physiological, pharmacological, pathophysiological and therapeutic measurements, in that order. Running with omics before the scientists and physicians who want to use them have learnt how to interpret the data in healthy animals and humans will not be the most helpful approach to establishing their long-term utility. RNA sequencing methods have revealed the extent and complexity of eukaryotic transcriptomes, which include large sections of the non-protein-coding regions of the genome hitherto regarded as junk (Wang et al., 2009). This has been the basis of the ENCODE project. Transcriptomics is very widely used in biology and drug development to obtain a more complete picture of gene activity in a tissue in response to a disease or a pharmacological intervention.
For example, it can be used to investigate the expression of genes encoding proinflammatory cytokines, or genes involved in tissue repair or proliferation, in safety assessment (Searfoss et al., 2005; Cui and Paules, 2010). It is less widely used in man than pre-clinically because of limited access to tissues. Proteomics went through a period when masses of data of variable quality were being produced, with an emphasis on pattern recognition in interpretation, but that situation is changing (Bantscheff et al., 2007; Nature Editorial, 2005). An EBI database called PRIDE has been established to standardize nomenclature and data storage (http://www.ebi.ac.uk/pride), and there have been enormous advances in protein MS (Shen et al., 2005), with simultaneous proteomic and metabolomic analysis. The human body fluid proteome, with its high dynamic range of protein concentrations, quantitation problems and complexity, presents enormous challenges (Apweiler et al., 2009). There are still mixed opinions about the value of proteomics in diagnosis (Bonislawski, 2013), but its use in studying the physiology and pathophysiology of systems in organisms as varied as potato tubers and humans looks promising (Moore and Weeks, 2011). Proteomic studies are of increasing value in clinical pharmacology using MS techniques that can identify the extent of protein glycosylation and phosphorylation. There is still some uncertainty about how useful protein profiles will be in translational medicine (Rifai et al., 2006), but stable isotope pulse labelling in man with deuterium or ¹³C, before and after a therapeutic intervention, has real promise for turnover measurements. Considerable efforts are also being made to develop proteome algorithms for the diagnosis and assessment of treatment of cancer (National Cancer Institute, 2013) and inflammatory disease (Tesch et al., 2010), but some uncertainty remains about their specificity, and the initial enthusiasm in the clinical arena has been tempered (Hanash and Taguchi, 2010). Metabolomics is another biomarker area where there is great interest in developing diagnostic and treatment profiles (Patti et al., 2012). The official metabolome database now has over 40 000 entries (Human Metabolome Database). Metabolomics, like proteomics, has been revolutionized by the ability to make simultaneous measurements of thousands of small molecules in biological fluids such as plasma and urine. The major problems are the chemical complexity and diversity of metabolites, many of which are short-lived. Disease-related patterns have been identified and some pattern changes related to drug therapy have been proposed (Kaddurah-Daouk et al., 2008). Metabolomics studies conducted as part of experimental medicine are beginning to yield important data in areas as varied as glucose metabolism in diabetes (Ho et al., 2013), cardiac metabolism studied using coronary sinus blood samples (Turer et al., 2009) and the growing area of tumour metabolism (Sreekumar et al., 2009). Applications in the pharmaceutical industry are increasing, with the potential to secure data about drug effects at earlier time points after drug administration and to study the time course of effects as the profile of drug concentration changes (Wei, 2011). At the time of writing, it is too early to be confident of the value of omics as the sole method of assessing pharmacological or therapeutic responses.
More work needs to be done on cross-checking against validated methods, taking into account that most older patients have more than one health problem and therapeutic intervention, which will complicate interpretation. PoT in clinical trials, using biomarker signals to assess efficacy, is a difficult area. Major regulatory bodies remain sceptical in the absence of extensive validation against hard clinical endpoints. The problem for the enthusiasts is that it is not just a matter of a good correlation of biomarkers with hard endpoints in one or two large clinical trials, but of much more extensive study in other disease conditions and treatments to see if the biomarker is stable and reproducible. A good example is the use of troponin I (cTnI) as a marker of myocardial injury. It is very well validated clinically in acute coronary syndromes and myocardial infarction, and in pre-clinical safety assessment. However, cTnI can be raised in a whole range of other situations: heavy physical exercise, occasional brief peaks in older people, renal failure, etc. It is still extremely valuable and better authenticated than almost any other biomarker, but its interpretation can still be challenging. For developers who are willing to risk a 'PoT wobble', a well-validated biomarker might be used as the endpoint in a dose-adaptive trial design in Phase II and then, if positive, progress made to a confirmatory trial with a hard clinical endpoint in Phase III. However, the wobble may cost a lot if the Phase III hard endpoint does not confirm the biomarker result. The need is to explore the value of 'omic' endpoints, using standardized methods, in existing clinical trials with hard endpoints (and not just in trials focused on that specific problem) to create a database that can begin to validate them in scientific and regulatory terms. Most common human diseases have multifactorial processes at work in initiating, developing, exacerbating and complicating the evolution of the disease. Intervention on single factors such as LDL cholesterol or high BP has had great successes, but many diseases have not responded adequately, and pathway analysis and identification of multiple points of intervention seem likely to be the way forward. Many common diseases are already treated with multi-drug combinations, including most cancers, tuberculosis, HIV and type II diabetes. The days of the Paul Ehrlich magic bullet aimed at a single disease target are not over, but an era of pre-designed, multi-drug treatment is the way forward for translational medicine. We are not lost in translation, but it will need new ways of thinking about control systems and pathways in logical and authenticated multi-component treatment regimens. Clinical trials have grown larger and larger, and more and more costly, most of them with a classical, parallel-group design with an active or placebo comparator. Efforts are being made to simplify design and cost with very large simple trials that have broad selection criteria and a single, clear-cut, adjudicated endpoint. Adaptive designs can, in principle, allow the dose to be optimized as the trial progresses. Bayesian trials are being used more often, as the 'priors' can be updated as a study progresses. Large, simple, Peto-esque trials have an important role, but their main application falls late in development when the drug is in extensive use. There are many advocates of purely observational trials using information from large health care or insurance databases.
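A minimal sketch of the Bayesian updating mentioned above, assuming a binary response endpoint, a conjugate Beta prior and entirely hypothetical numbers: at an interim analysis the prior on the response rate is updated with the accumulating data, and the posterior probability that the true rate exceeds a reference value can feed a pre-specified adaptive rule.

```python
# Illustrative Bayesian interim update for a binary-response PoC arm (hypothetical numbers).
from scipy.stats import beta

prior_a, prior_b = 2, 8          # weakly informative prior: ~20% expected response rate
responders, treated = 9, 20      # interim data for one dose arm
reference_rate = 0.25            # response rate considered clinically uninteresting

# Conjugate update: Beta(a + successes, b + failures)
post_a = prior_a + responders
post_b = prior_b + (treated - responders)

posterior = beta(post_a, post_b)
p_better = 1.0 - posterior.cdf(reference_rate)   # P(true rate > reference | data)

print(f"Posterior mean response rate: {posterior.mean():.2f}")
print(f"P(response rate > {reference_rate:.0%}): {p_better:.2f}")
# An adaptive design might expand this arm if p_better exceeds a pre-specified threshold,
# or drop it if the probability falls below a futility boundary.
```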
Such observational studies are only practical when large numbers of patients are taking the drug in question, which usually means some time after marketing. They suffer from what statisticians term 'channelling bias', as they are not randomized and the prescriber's reasons for selecting one treatment over another are not known. Trials that involve study of two-, three- or four-component combinations are very complicated, particularly if the intention is to select the optimal dose of each component in the combination. Trials increase in size as developers seek to achieve a 10-15% margin of benefit, often the minimum that a purchaser will consider, over the best existing therapy. When there are reasonably effective drugs for that indication already on the market, 15% better may be an insurmountable hurdle. Trial endpoints are often clinical features that only show improvement over long periods of treatment, for example, in dementia. There is a widely held view that translational medicine cannot go on like this. Advocates of the 'omics' would like to see biomarkers accepted as regulatory endpoints for product licensing but, even if that happens, it is almost certain that purchasers will continue to insist on hard clinical endpoints to which they can attach value in cash terms. Breakthroughs dominate the headlines, but purchasers need to have more understanding of the incremental nature of many medical advances, splendidly narrated in Siddhartha Mukherjee's biography of cancer, 'The Emperor of All Maladies' (2010). Purchasers, and in consequence drug developers, pay a lot of attention to matters like once-a-day dosing with a 24 h PK profile, but limited attention to the patient value of developing medicines with similar pharmacology but a worthwhile reduction in side effects. Purchasers appear not to put much value on reducing the symptom burden of many therapeutic agents unless the symptoms are severe. Is anyone trying to develop a potent corticosteroid that does not cause sleep disturbance? The outlook may not be as gloomy as the pessimists believe. Past breakthroughs have usually been proven in severe stages of disease, and their use has then trickled down to less and less severe clinical situations. The evolution of treatment for high BP is the best example. But would present-day purchasers have paid a premium price for those incremental changes over 25 years? Drugs tested in one situation may prove to have value in another, for example, the thalidomide derivative lenalidomide used in multiple myeloma. If attempts to subdivide patients with common diseases into responsive and non-responsive groups succeed, personalized medicine will grow. Early hopes from the HGP have not had much impact, but the explosive growth of new information about the control of body systems through 'omics' and ENCODE should strengthen it. There will have to be some adjustment of regulatory and purchaser outlooks. Medicines that are as effective as existing therapy, but more convenient to take and causing a much lower burden of symptomatic side effects, need greater recognition. For older patients, swallowing a large tablet may be much more difficult than a small one. It is often the case that it is only when a new medicine has been on the market for a lengthy period that its benefits and risks become clearer. Would earlier marketing be permitted if the country concerned had a fully operational electronic health care record system that would allow risk-benefit data about a new drug to be continuously updated?
Would purchasers be willing to adjust the initial premium price down if the medicine did not deliver the degree of hoped-for benefit, or upward if it delivered unexpectedly good results? 'Drugs don't work in patients who don't take them' (C. Everett Koop, former US Surgeon General). In the past, drug developers have regarded their job as done when a major regulatory body, such as the EMA or FDA, has approved a product for marketing. The approved product label specifies what it can be used for, the dose, warnings and precautions, with an implicit assumption that these govern how it will be used (Dusetzina et al., 2012). The gap between the contents of the label and the way a medicine may be used in practice can be wide (Ryan et al., 2011), and at least some of the problems that arise reflect back on the much earlier stages of translational medicine. Drug developers, prescribing physicians and their patients share responsibility for this situation. The social-medicine component of the problem is epitomized by the low adherence to medications of patients who are depressed. The hazard ratio for poor adherence was 1.76, based on the response to a simple patient-completed question about being depressed (Grenard et al., 2011). Patients who are depressed, bereaved, widowed, unemployed, etc., are less likely to take a prescribed medication regularly, but few physicians or drug developers consider the effect that the resulting depression may have on adherence to critical treatment of other conditions. A substantial proportion of patients do not take their medicines in the dose and at the time intervals for which they were prescribed, an issue that is described with a variety of terms such as compliance or, more politely, adherence. A review of adherence to inhaled corticosteroids or oral prednisolone in severe asthma concluded that only 30-50% of patients took the prescribed doses, and failure to do so was a material factor in exacerbations (Williams et al., 2004). Poor adherence has been reported in a variety of cardiovascular and metabolic diseases (Nelson et al., 2006). Much higher adherence rates have been reported in randomized controlled trials (RCTs), but in the original MRC tuberculosis trial, which used streptomycin and PAS, random home visits checking for PAS in urine showed a positive result in only half the patients, although all had been warned of the risk of not taking their medication regularly. This was probably because of the very large capsules of PAS and the attendant gastrointestinal side effects. Assessment of adherence is difficult, as patients report higher figures than independent verification, such as the interval between refilling their prescription at a pharmacy, demonstrates. Poor adherence adversely affects cardiovascular outcomes after a myocardial infarction (Choudhry and Winkelmayer, 2008). While many adherence failures are unrelated to drug side effects, certain drugs with relatively severe side effects, such as HAART for HIV, some oncology drugs that cause severe symptoms such as diarrhoea, and antidepressants that impair male sexual function, do have adverse effects on adherence (Grenard et al., 2011). Although there is less published information about mild to moderate symptoms, there is little doubt that they also contribute to poor adherence, for example, with tamoxifen adjuvant treatment of breast cancer (Partridge et al., 2003), a topic that has recently revived concern in British newspapers.
Patients who omit doses, and do not disclose it to their doctor, may be prescribed a higher dose to achieve the desired effect. The result is either adverse effects from too high a dose, if the patients take the new dose, or a widening gap between the dose taken and the dose prescribed if they do not. Wider use of patient-completed questionnaires to record the symptoms caused by medicines would provide much more reliable data than formal, regulatory, adverse reaction reports. Closer attention to the measurement of symptoms, and to their effect on quality of life and adherence to medicines, ought to feature more prominently in drug development. Adherence can also be an issue in clinical trials despite measures such as tablet counts. Electronic pill boxes with alarms and recording of opening times and dates are sometimes used to try to improve adherence, as are regular reminders by telephone or email. There is an old saying in clinical medicine that a devoted and reliable spouse is the best guarantee that medicines will be taken regularly, but many elderly patients living alone do not have the benefit of that high-value help, and poor adherence may often be the reason for treatment failure. Big Data is a byword of our time, and biology/medicine is one of its greatest challenges. Big Data is a collection of data so large and complex that it becomes difficult to process using standard database retrieval and analysis tools like Excel and Spotfire. The generation, acquisition, storage and analysis of biological data in projects such as the HGP, GWAS and ENCODE threaten to be outpaced by the ever-rising influx of new data (Gerstein, 2012; Marx, 2013). The files are growing so large that even transferring them from the primary storage (often in a 'cloud' database) to a laboratory for analysis is becoming impractical. The analysis may also have to be done in the cloud (Dai et al., 2012). An example is the European Bioinformatics Institute (EBI), which currently stores 20 petabytes (1 petabyte is 10¹⁵ bytes) of data and back-ups about genes, proteins and small molecules, and is growing all the time (Marx, 2013). The ENCODE database at UCSC has 50 terabytes of organized data and another 200 terabytes of raw data (Genome.UCSC.edu/ENCODE). The use of Big Data in translational medicine falls under several headings. The first heading is necessity. Much current and future research will rely on handling very large amounts of data flowing in real time from ongoing research, prompt analysis of critical information and storing the whole in readily searchable formats. It will be used to make decisions in pre-clinical and human translational studies, test hypotheses, generate new ideas and check results against earlier data. A few projects will be performed entirely in silico, but most will iterate between laboratory and clinical experimental data and computer modelling of systems and pathways. Projects like ENCODE have great potential in physiology, pharmacology and medicine to map transcription factor sites in different tissues in health and disease, but they inevitably generate enormous amounts of data in genome-wide scans. The problem is that although the coding part of the genome is basically the same in all nucleated body cells, the transcription factor profile is different in the large number of different human body cell types (>400), both in a normal physiological state and even more so in abnormal situations.
Great efforts have been made to develop ENCODE software analysis tools in parallel, but these require some experience and training to make full use of them. The second heading is data mining from historical data collections, from wet laboratory experiments to large clinical trials. A difficulty is that the protocols used in different trials to select patients, drug doses and endpoints are often disparate and may have employed methods of measuring endpoints that are no longer in use. A major effort is being made to bring together data on placebo groups in large trials of conditions such as Alzheimer's disease from research-based pharmaceutical company files. Unfortunately, many 'historic' documents (more than 10 years old) lack clear summaries and important details of protocols, assay methods, trial endpoints and adverse reaction reports. If they still exist, they may be filed in different places. Modern translational medicine researchers also require fast links to a multitude of external databases for chemical structures, genes, polymorphisms, drug targets, disease classification, etc. Links alone are not enough; they also require fast, standardized search routines rather than learning a different system for every database they consult. A third, rapidly growing area of Big Data is ongoing scrutiny of health trends and the effect of new and old medicines on both safety and efficacy (Wang et al., 2012), a major extension of the Cochrane database approach (cochrane.org), which relies on published data. An evolving concept is to bring together data from many large electronic health record databases to analyse anonymized data on issues like disease incidence, drug efficacy and safety in tens of millions of well-documented patients in different countries and parts of the world. This is an area in which the very large informatics companies have become interested (ibmdatamag.com). In principle, it should be possible to deploy these resources to undertake very large-scale trials with simple protocols and point-of-care randomization. This could reduce costs and yield a more representative population for the late stage and post-marketing assessment of translational medicine. Such large-scale projects are raising some concerns about privacy and ethics (Ioannidis, 2013). Wide use of Big Data in translational medicine is an inevitable development. Progress is being made with electronic 'reading' of documents to extract intelligible, non-numerical information from large files. It will provide a powerful set of new tools, but there will have to be well-trained bioinformaticians, epidemiologists, statisticians, biological scientists and physicians who understand the underlying assumptions, strengths and weaknesses inherent in complex data retrieval and analysis. There is considerable scope for reaching misleading conclusions unless findings are cross-checked and replicated. A major issue in therapeutics is when to intervene. In many diseases (most cancers, the dementias, the arthritides, psychosis), the current therapeutic interventions are either ineffective or delivered at a stage when extensive tissue malfunction has occurred, much of it not reversible. The stages of many chronic diseases have a big effect on the response to different kinds of pharmacology and need more attention in drug development. The pressure now is to move to earlier interventions. Pharmacology already plays an important role in preventive medicine.
The increase in life expectancy in the past decade owes much to the control of BP and the lowering of LDL-cholesterol; these are examples of preventive pharmacology, as is the use of low-dose aspirin to prevent myocardial infarction. For medicine, there are major challenges in detecting disease at a very early stage, predicting progression and measuring an indicative response to an intervention in a relatively short period of time. Clinical trials that might take 10 years to reach an endpoint face challenging problems of cost, adherence to medication and administrative sustainability. Medicines intended for this early stage also have onerous requirements for safety, and only a very low incidence and severity of symptomatic side effects would be acceptable. In diseases of long duration, the factors that currently maintain the illness may not be the same as those that initiated it. The translational medicine of early intervention needs a lot of original thinking. The key is better understanding of the factors initiating and maintaining human illness, which must be a top priority. The environment for early-intervention translational medicine is challenging, but fortunately there are still some exciting opportunities. Purchasers are not perfect judges, but neither are the pharmaceutical companies that market drugs. Drugs that have been marketed with very limited expectations have sometimes succeeded beyond all predictions. Bayer aspirin was only expected to sell about 60 kg a year when it was introduced. The reverse has happened on many occasions, when great expectations about a new medicine have not been fulfilled. Decisions on licensing are based on the available RCT results, which may include 10 000 or more carefully selected patients, most of them supervised by investigators who offer a good standard of medical care. Patient-years of exposure in RCTs will be substantially smaller than in eventual clinical use, even with drugs that are intended for very long-term use. Such studies may not accurately predict real-world experience, where prescribing decisions, patient behaviour and the prevailing standard of medical care are much more variable. One solution is to employ 'large simple' RCTs with broad inclusion criteria, large numbers of events, minimal data requirements and robust endpoints, with electronic registries or health records to track patients. Another approach is to use 'cluster-based' randomization: clinics, nursing homes or schools are randomly selected to receive or not receive an intervention. A third approach is 'point of care' randomization, which embeds research into routine clinical care: enrolment, follow-up and clinical event data are all collected within the context of routine care (R. Horwitz, pers. comm.). Such randomized studies in the community can achieve a much closer approach to real-world practice than standard RCTs (although they are unlikely to replace them) and are much easier to interpret than purely observational studies without randomization. The conflict between protecting intellectual property and patents and sharing information is evolving in favour of greater transparency. One aim is to get closer to the LiT1 model by close and direct collaboration between industrial scientists and physicians and their counterparts in academia. At GSK, Academic Discovery Performance Units have been formed with academic clinical collaborators in completely shared projects with full access to relevant data and techniques. Other companies have somewhat similar initiatives.
Clinical trials must now be registered with regulators, and many companies now post the results of their trials on open-access databases. Willingness to share detailed data from earlier trials, for example, the placebo groups in Alzheimer's disease trials, is growing. Within industry, there is some development of what are termed 'walled gardens', where non-competitive information can be shared with other research-based companies. Making data in the files of regulatory authorities or companies available to qualified researchers is an objective that is slowly being realized. Industry has concerns, with some reason, that information from poorly informed searches of very complex files will be quoted selectively and cause a great deal of additional work. Fortunately, many pharmaceutical companies are recognizing that the value of greater transparency often outweighs the risk of giving too much help to competitors, but there has to be a limit during the early stages of new product development. The cost of new medicines has become an issue everywhere. Regulatory, scientific and medical requirements in discovery, but particularly in the later phases of development, have soared in cost. Larger clinical trials are required to demonstrate efficacy and safety for registration with regulators. If there are concerns about the safety of an individual product or a class effect, further large-scale, costly trials may be required after registration. Approval by a major regulatory agency, although critically important, is only a step towards a more important hurdle: will purchasing agencies agree to pay for the product? Much of the cost of medicines in advanced countries is borne by taxpayers or insurers. Instead of an individual medical doctor choosing what medicines are available to his or her patients, national or regional purchasing agencies decide what they are willing to provide out of the available budget. A number of common diseases, such as hypertension and diabetes, are reasonably well managed with current treatments. There is still room for improvement, but the threshold is high and purchasers will only pay a non-generic price if the new treatment has demonstrated a worthwhile incremental advantage in the improvement of health. They are the judges of what is a worthwhile increment. Agencies such as the UK National Institute for Health and Care Excellence evaluate cost-benefit, with a major emphasis on the improvement in quality-adjusted life years (QALYs). The maximum acceptable cost is said to be about £30 000 (ca. US $50 000) per QALY, with only a few exceptions. Translational medicine achievements are translated into currency by both purchasers and pharmaceutical companies, although using rather different formulae. Meanwhile, the cost of developing new medicines threatens to outpace the rate of discovering them. The most exciting developments in recent years, the mAbs, have also proved to be among the most expensive. To sustain the research and development of new medicines, which are still much needed, costs will have to be reduced, particularly those of regulatory requirements and large clinical trials. In principle, both are possible if large simple trials based on communities with electronic health records become the norm, with efficacy and safety data becoming available for analysis in real time. Drug discovery over the last 60 years has had cyclical ups and downs and has become accustomed to the great majority of projects failing.
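The purchaser's arithmetic referred to above is, in essence, an incremental cost-effectiveness ratio. The sketch below uses hypothetical figures and deliberately ignores discounting and uncertainty analysis; it simply compares the extra cost per QALY gained with a willingness-to-pay threshold of the size quoted.

```python
# Simplified incremental cost-effectiveness sketch (hypothetical figures, no discounting).

def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

threshold_per_qaly = 30_000          # GBP, approximate willingness-to-pay ceiling

# Hypothetical per-patient comparison of a new treatment against standard care
ratio = icer(cost_new=42_000, cost_old=18_000, qaly_new=6.1, qaly_old=5.2)

print(f"ICER: £{ratio:,.0f} per QALY gained")
print("Within the usual threshold" if ratio <= threshold_per_qaly else "Above the usual threshold")
```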
Almost all the major developments have been based on strong basic science, credible clinical evidence and advances in technology, but with single molecule targets. Many of the current major developments in basic and clinical science are based on applying very large data sets, GWAS, ENCODE and the omics, to complex pathways in biological control, disease aetiology and pathogenesis. The assumption is that many, perhaps most, of these will require multiple points of intervention, not single targets. Can translational medicine adapt to this new world? The answer must be, in principle, 'Yes' from an operational perspective. Unfortunately, there is a deeper layer of difficulty concerned with understanding the manipulation of control systems in the genome and the epigenome, and the complexity of protein-to-protein, cell-to-cell and organ-to-organism interactions. There are some powerful and specific tools in siRNAs, oligonucleotides, zinc-finger molecules and gene constructs. Most have already been used in clinical trials, for example, in some forms of inherited muscular dystrophy or immune deficiency, but the problems of delivering them in the right amount to the right cells, rather than mostly to the wrong cells or chromosomes, are formidable, and their use may be limited to serious diseases with no alternatives. Mipomersen, the first oligonucleotide approved by the FDA (for familial hypercholesterolaemia), was required to have a risk management strategy because of limited safety information. The EMA refused approval because of liver safety concerns. We may have to depend, for some time, upon the tools we still have in small NMEs and mAbs for common diseases with multifactorial aetiology until we devise new, and more precisely targeted, methods. There is an old story that a very senior VIP was visiting a research laboratory and asked a young PhD student what she was hoping to discover next year. The student gulped and replied, 'If I knew what I was going to discover next year I wouldn't wait until next year to discover it'. Translational medicine in LiT4 is a bit like a group of surfers riding The Great Wave off Kanagawa by the famous Japanese artist, Katsushika Hokusai. All wear a swim vest emblazoned 'Translational Medicine', and in the lead are two with the subtitles 'Basic Pharmacology' and 'Clinical Pharmacology'. Riding the crest is enormously exhilarating but very challenging, and like the young PhD student, they are not quite sure what they are going to discover, or where or when the data tsunami will throw them up on the beach. But some will ride the great wave to a happy landing, and the progress of translational medicine will continue and prosper to the benefit of mankind.

References

Complementary DNA sequencing: expressed sequence tags and human genome project
A study of the adrenotropic receptors
Approaching clinical proteomics: current state and future fields of application in fluid proteomics
ISIS-2: 10 year survival among patients with suspected acute myocardial infarction in randomised comparison of intravenous streptokinase, oral aspirin, both, or neither. The ISIS-2 (Second International Study of Infarct Survival) Collaborative Group
Twenty years of functional MRI: the science and the stories
Quantitative mass spectrometry in proteomics: a critical review
MicroRNAs: target recognition and regulatory functions
Meta-analysis of multiple primary prevention trials of cardiovascular events using aspirin
Association of serum Rituximab (IDEC-C2B8) concentration and anti-tumor response in the treatment of recurrent low-grade or follicular non-Hodgkin's lymphoma
A personal view of pharmacology
Cytokine storm in a phase 1 trial of the anti-CD28 monoclonal antibody TGN1412
High-throughput crystallography for lead discovery in drug design
The unsteady march of cancer protein biomarkers into the clinic. The Global Newsweekly of Proteomic Technology
Humanized mouse models to study human diseases
Safety and immunotoxicity assessment of immunomodulatory monoclonal antibodies
Monoclonal antibody therapeutics: history and future
Bispecific antibodies for cancer therapy. The light at the end of the tunnel?
Carbamazepine-induced toxic effects and HLA-B*1502 screening in Taiwan
Medication adherence after myocardial infarction: a long way left to go
The White House
Changes in mitochondrial DNA as a marker of nucleoside toxicity
Use of transcriptomics in understanding mechanisms of drug-induced toxicity
Bioinformatics clouds for big data manipulation
Genetic polymorphisms affecting drug metabolism: recent advances and clinical aspects
HIV-1 and T cell dynamics after interruption of highly active antiretroviral therapy (HAART) in patients with a history of sustained viral suppression
Anti-HIV drugs: 25 compounds approved within 25 years after the discovery of HIV
History of HAART - the true story of how effective multi-drug therapy was developed for treatment of HIV disease
Renal biomarker qualification submission: a dialog between the FDA-EMEA and Predictive Safety Testing Consortium
Intracellular drug concentrations
MicroRNAs in cardiac disease
Pharmacokinetics, pharmacodynamics and physiologically based pharmacokinetic modelling of monoclonal antibodies
Sir Horace Smirk: pioneer in drug treatment of hypertension
Problems of communication of a drug regulatory authority
Impact of FDA drug risk communications on health care utilization and health behaviors: a systematic review
Monoclonal antibody TGN1412 trial failure explained by species differences in CD28 expression on CD4+ effector memory T-cells
Genomics: ENCODE explained
Crystal structure of intact IgG antibodies
FDA Guidance for Industry Drug Interaction Studies, Study Design, Data Analysis, Implications for Dosing, and Labelling Recommendations
Approved Risk Evaluation and Mitigation Strategies (REMS)
Anti-TNF therapy, from rationale to standard of care: what lessons has it taught us
Universal amplification, next-generation sequencing, and assembly of HIV-1 genomes
Quantification in cardiac MRI
Genomics: ENCODE leads the way on big data
On the immortality of television sets: 'function' in the human genome according to the evolution-free gospel of ENCODE
Depression and medication adherence in the treatment of chronic diseases in the United States: a meta-analysis
Decade of fragment-based drug design: strategic advances and lessons learned
The grand challenge to decipher the cancer proteome
Initiative in Systems Pharmacology
Whole genome deep sequencing of HIV-1 reveals the impact of early minor variants upon immune recognition during acute infection
Aspirin and other antiplatelet agents in the secondary and primary prevention of cardiovascular disease
Selective inhibitors of dihydrofolate reductase
Metabolite profiles during oral glucose challenge
Domain antibodies: proteins for therapy
Trastuzumab - mechanism of action and use in clinical practice
Strengthening-big-data-in-the-pharmaceutical-industry
Tyrosine phosphorylation: thirty years and counting
Why most published research findings are false
Informed consent, big data, and the oxymoron of research that is not research
Current advances in humanized mouse models
Brain imaging in Alzheimer disease
T-DM1 retains all the mechanisms of action of trastuzumab and efficiently inhibits growth of lapatinib insensitive breast cancer
Metabolomics: a global biochemical approach to drug response and disease
MicroRNAs en route to the clinic: progress in validating and targeting microRNAs for cancer therapy
Barriers to a cure for HIV: new ways to target and eradicate HIV-1 reservoirs
IL-6: from its discovery to clinical applications
Enzymes for chemical synthesis
Continuous cultures of fused cells secreting antibody of predefined specificity
Dual targeting strategies with bispecific antibodies
Overview of positron emission tomography, hybrid positron emission tomography instrumentation, and positron emission tomography quantification
Small-animal positron emission tomography as a tool for neuropharmacology
The influence of drug-like concepts on decision-making in medicinal chemistry
Humanized mice for modeling human infectious disease: challenges, progress, and outlook
A short history of thalidomide embryopathy
Morphological and functional imaging in COPD with CT and MRI: present and future
Molecular recognition of protein kinase binding pockets to design potent and selective kinase inhibitors
Lead- and drug-like compounds: the rule-of-five revolution
Antigen-specific human antibodies from mice comprising four distinct genetic modifications
ENCODE: The human encyclopaedia
Complete viral RNA genome sequencing of ultra-low copy samples by sequence-independent amplification
A statin-dependent QTL for GATM expression is associated with statin-induced myopathy
The protein kinase complement of the human genome
Monitoring drug target engagement in cells and tissues using the cellular thermal shift assay
Biology: the big challenges of big data
Positron emission tomography molecular imaging for drug development
Phage antibodies: filamentous phage displaying antibody variable domains
Criteria for the use of omics-based predictors in clinical trials
MRC trial of treatment of mild hypertension: principal results
Of mice and not men: differences between mouse and human immunology
Proteomics and systems biology: current and future applications in the nutritional sciences
Safety and efficacy of bortezomib-melphalan-prednisone-thalidomide followed by bortezomib-melphalan-prednisone (VMPT-VT) versus bortezomib-melphalan-prednisone (VMP) in untreated multiple myeloma patients with renal impairment
Can the flow of medicines be improved?
Fundamental pharmacokinetic and pharmacological principles toward improving phase II survival The Emperor of All Maladies: A Biography of Cancer Imatinib and its successors -how modern chemistry has changed drug development Office of Cancer Clinical Proteomics Research/CPTAC Data Portal Overview (2011-present) Proteomics', new order Home page Self-reported adherence with medication and cardiovascular disease outcomes in the Second Australian National Blood Pressure Study (ANBP2) Halothane hepatitis: a model of immune mediated hepatitis Tyrosine kinases in rheumatoid arthritis Cytokine signaling modules in inflammatory responses Nonadherence to adjuvant tamoxifen therapy in women with primary breast cancer Low dose aspirin and inhibition of thromboxane B2 production Metabolomics: the apogee of the omics trilogy Lipoprotein changes and reduction in the incidence of major coronary heart disease events in the Scandinavian simvastatin survival study (4S) Believe it or not: how much can we rely on published data on potential drug targets The pleiotropic effects of the SDF-1-CXCR4 axis in organogenesis, regeneration and tumorigenesis Protein biomarker discovery and validation: the long and uncertain path to clinical utility The influence of SLCO1B1 (OATP1B1) gene polymorphisms on response to statin therapy Consumer-oriented interventions for evidence-based prescribing and medicines use: an overview of systematic reviews Two phase 3 trials of Bapineuzumab in mild-to-moderate Alzheimer's Disease Computational insights for the discovery of non-ATP competitive inhibitors of MAP kinases Protein kinase biochemistry and drug discovery The role of transcriptome analysis in pre-clinical toxicology Automated 20 kpsi RPLC-MS and MS/MS with chromatographic peak capacities of 1000−1500 and capabilities in proteomics and metabolomics Humanized mice in translational biomedical research A microscale in vitro physiological model of the liver: predictive screens for drug metabolism and enzyme induction Lapatinib-induced liver injury characterized by class II HLA and Gilbert's syndrome genotypes Metabolomic profiles delineate potential role for sarcosine in prostate cancer progression Cytokine Storm in a Phase 1 Trial of the Anti-CD28 Monoclonal Antibody TGN1412 Hy's law: predicting serious hepatotoxicity Successes achieved and challenges ahead in translating biomarkers into clinical applications Virtual computational chemistry laboratory -design and description Coming up trumps: Genome-wide association studies Lovastatin and beyond: the history of HMG Co-A reductase inhibitors Tumor necrosis factor antagonist mechanisms of action: a comprehensive review Metabolomic profiling reveals distinct patterns of myocardial substrate use in humans with coronary artery disease or left ventricular dysfunction during surgical ischemia/reperfusion Thirty-three years of drug discovery and research with Dr Paul Janssen History of aspirin and its mechanism of action MicroRNAs in liver disease RNA-Seq: a revolutionary tool for transcriptomics Metabolomics and its practical value in pharmaceutical industry The effect of salicylates on the hemostatic properties of platelets in man The origins and the future of microfluidics Relationship between adherence to inhaled corticosteroids and poor outcomes among adults with asthma Can animal models of disease reliably inform human studies Untoward effects associated with practolol administration: oculomucocutaneous syndrome Wyss Institute for Biologically Inspired Engineering Plasma lipid 
This review is an account of the evolution of translational medicine based on 38 years' experience in academia at the Royal Postgraduate Medical School, Hammersmith Hospital, London, and 18 years in industry at GlaxoSmithKline, where I am a full-time senior consultant to R&D management. I am grateful for the help and advice of many outstanding colleagues, in academia and industry, over those years, but the opinions expressed in this article are strictly my own. This review was suggested by the International Union of Basic and Clinical Pharmacology, Committee on Receptor Nomenclature and Drug Classification, of which I am a member. I have no financial interest in the contents of this review.