
Successful Strategies in the Development and Technology Transfer of Antibody-Drug Conjugates


Abstract:
Effective introduction of a monoclonal antibody or antibody-drug conjugate into clinical trials and final commercialization requires a defined path toward an efficient, robust manufacturing process with suitable quality parameters. In this process, efficient technology transfer between the drug-developing technology originator and an approved and authorized third party or parties is crucial. In this article, Cynthia Wooge, Ph.D., discusses the requirements of a properly designed technology transfer process and how to ensure a successful transfer and reproducible conjugation process with solid analytics, appropriate engineering design, process (quality) controls and quality assurance. She emphasizes the need to understand which details are critical and how they get effectively communicated between the various parties involved in the transfer process.

Keywords:
ADC Antibody-drug Conjugates, Technology Transfer, Biotechnology, Monoclonal Antibody, Antibody, FDA, NCI, TTC, Technology Transfer Center, CMO, Contract Manufacturing Organization


1.0 Introduction
Within the biotech and pharmaceutical industry, technology transfer refers to the systematic procedure of passing documented knowledge and experience to an appropriate, responsible and authorized party. It is required to progress successfully from initial preclinical drug discovery to product development to clinical trials and, finally, to full-scale commercialization and distribution of a pharmaceutical drug or biologic. [1]

Technology transfer happens in many situations. A company might be addressing intra- or intercompany transfers of technology, expansion or relocation of operations, consolidations and mergers, or working with a third party like a government research institute or university. Alternatively, a company might be near market-ready with a pharmaceutical product, but not have commercial manufacturing capabilities, so they require support from one, or multiple, commercial contract manufacturing organizations (CMOs). There can also be the case of startup pharmaceutical or small biotechnology companies having a fully developed product and regulatory approval, but no functioning distribution channel.

In general, technology transfer, whether government-directed or between commercial parties, helps form alliances between development partners and creates the framework for ongoing development of the technology, manufacturing, commercial development and distribution. Within this process, drug developers need to make their (proprietary) technology available to the commercial partners that are part of the agreed-upon development path.

Overall, the process of technology transfer is complex. This complexity stems primarily from the multitude of stakeholders involved in pharmaceutical development. It is, however, a critical process.

2.0 Government and Academia
In the development of pharmaceuticals and biologics, technology transfer is quite common. For example, the National Cancer Institute, which is part of the U.S. National Institutes of Health (NIH), recognizes the importance of technology transfer in order to help translate basic science into direct benefits for public health. To facilitate the process, the institute has set up the Technology Transfer Center, which establishes formal relations and collaborative agreements between the pharmaceutical industry, academia and nonprofits, making co-development through technology transfer possible. The center’s main objective is to support the NIH’s mission. [2]

This work includes recommendations made to the NIH’s Office of Technology Transfer concerning the filing of domestic and foreign patent applications as well as working closely with NIH investigators and outside parties to facilitate commercialization efforts to benefit public health. All these activities are guided by the Federal Technology Transfer Act of 1986 and other federal laws (including the Stevenson-Wydler Technology Innovation Act and the Bayh-Dole Act) that use patents as incentives for commercial development of technologies and are designed to establish collaboration between academia, federal laboratories and the biopharmaceutical industry. [3]

In this case, the transfer of federally funded research to the private sector is intended to bring pharmaceutical drugs and biologics to the marketplace sooner and more efficiently than would be possible if a federal agency acted alone. [4] Similar organizations around the globe, including government, academic and commercial entities, aid and manage technology transfer between the various partners.

3.0 CMO Challenges
For CMOs, there are a number of challenges involved in technology transfer, including varied production, development status, timing requirements, and communication flows. [Figure 1: Process development approach; Figure 2: GMP Manufacturing Communication Flow] This is particularly true in the transfer of complex technology, such as in immunoconjugates or antibody-drug conjugates (ADCs). In any situation, successful technology transfer depends on robust communication between originator and CMO.

 

[Figure 1: Process development approach]

From an organizational perspective, analytical method development and process development are key in successful technology transfer. It is important that the technology transfer covers all the key analytics, as well as enabling process chemistry and ensuring raw or source-material release in advance of manufacturing. In these cases, analytical method development and process development are concurrently managed to allow rapid communication and coordination of both activities. Yet this requires a balancing act because a “product” is needed to develop the analytical methods, while, on the other hand, the “right product” does not exist until the appropriate analytical methods are in place. [5] [6]

 

 

[Figure 2: GMP Manufacturing Communication Flow]

4.0 Quality by Design
A successful technology transfer process incorporates Quality by Design (QbD) principles: a systematic, streamlined approach that begins with a predetermined objective and emphasizes project execution based on accurate product and process understanding, robust process control, and complete and accurate communication. Following the guideline of the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH Q8), this approach is grounded in sound science and quality risk management methods. [Figure 3: Application of Analytical Methods]

 

[Figure 3: Application of Analytical Methods]

To comply with the ICH Q8 definition, the first step in effective technology transfer is the development of a transfer plan detailing all steps and criteria of the process and outlining, if applicable, the flow of the existing process and its critical process parameters. The expected results should also be included in this plan.

Implementing QbD principles further involves risk assessment of potential areas of failure, small-scale model qualification, design and execution of experiments, definition of operating parameter ranges and process validation acceptance criteria followed by manufacturing-scale implementation and process validation. The impact of operating parameters on product quality attributes and process-performance parameters is established in statistical experimental designs and applied to the execution of process characterization studies.

Process characterization experiments are then used to define the proven and acceptable range and classification of operating parameters, leading to a consistent product and optimized process. [Figure 3: Application of Analytical Methods]
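The range logic described above can be sketched in a few lines. The parameter names, range values and classification labels here are purely illustrative, not taken from any real ADC process:

```python
# Hypothetical sketch of classifying measured operating parameters against
# ranges derived from process characterization studies. All numbers are
# invented for illustration.

# (min, max) proven acceptable ranges (PAR) from characterization
PAR = {"temperature_C": (18.0, 28.0), "pH": (6.8, 7.6), "molar_excess": (4.0, 10.0)}
# Narrower normal operating ranges (NOR) used as routine set points
NOR = {"temperature_C": (20.0, 25.0), "pH": (7.0, 7.4), "molar_excess": (5.0, 8.0)}

def classify(param: str, value: float) -> str:
    """Report how a measured value relates to the defined ranges."""
    lo, hi = NOR[param]
    if lo <= value <= hi:
        return "within NOR"
    lo, hi = PAR[param]
    if lo <= value <= hi:
        return "excursion: within PAR, outside NOR"
    return "deviation: outside PAR"

for param, value in [("temperature_C", 22.0), ("pH", 7.5), ("molar_excess", 12.0)]:
    print(param, "->", classify(param, value))
```

A value inside the NOR needs no action, an excursion inside the PAR is acceptable but noted, and anything outside the PAR is a deviation triggering investigation, mirroring the classification of operating parameters described above.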

Finally, successful implementation and validation of the process in the manufacturing facility, and subsequent commercialized manufacturing, verify that the approach taken is suitable for the development, scale-up and operation of the manufacturing process. [7]

The implementation of QbD principles will almost always be completed during phase II clinical trials. Process standards for manufacturing and validation tests are then established during phase III clinical trials, including product testing and audits to guarantee that the product manufactured after the technology transfer meets all predetermined standards and requirements.

Furthermore, implementing QbD principles in the technology transfer process facilitates the robustness of the final manufacturing process. It also aids continued improvements and offers consistency in the process across multiple facilities. [8]

5.0 Emphasis on Communication
Communication between transferring parties is one of the most significant elements of successful technology transfer. As part of a robust communication approach, members of each of the transferring parties need to be confident and keenly aware of their roles, scope and responsibilities. Failure to properly develop a robust communication strategy may cripple the entire transfer process, ultimately wasting millions of dollars.

6.0 Growth of the ADC Market
The recent approval of two ADCs—ado-trastuzumab emtansine (Kadcyla®; Genentech/Roche) and brentuximab vedotin (Adcetris®; Seattle Genetics)—and more than 30 clinical trial programs are making the market for ADCs especially interesting for CMOs seeking to expand their business. [9][10] Currently, large oncology-focused pharmaceutical and biopharmaceutical companies are heavily investing in this market. Experts believe the market for ADCs will grow exponentially. According to a new report published in March 2014, the ADC market is anticipated to reach US$3.45 billion by 2018. [11]

One reason for the expected growth is that companies are able to turn existing antibody therapies into ADCs, thus extending patent life to make the original antibody more profitable. Another scenario involves drug developers “resurrecting” ineffective antibodies and turning them into successful ADCs. Yet because of their complexity, the manufacturing and production of ADCs offers unique challenges for CMOs as well as government/academic and biopharmaceutical companies seeking CMO partners to help them in the manufacturing of their ADCs. Meeting the challenges of this process is no easy path. To take the process from the bench to the clinic requires a dedicated and robust approach to technology transfer.

7.0 Complexity of ADCs
ADCs are composed of a cytotoxic drug conjugated to a monoclonal antibody or antibody fragment, and are designed to combine better tumor penetration and killing properties with fewer side effects for cancer patients. They show high efficacy as cancer therapeutics. [Figure 4: Antibody-drug Conjugates: A Variety of Chemistries]

[Figure 4: Antibody-drug Conjugates: A Variety of Chemistries]

 

Conventional chemotherapy is designed to eliminate fast-growing tumor cells. It can, however, also harm healthy proliferating cells, which causes undesirable side effects. [12] In contrast, ADCs are designed to increase the efficacy of therapy and reduce systemic toxicity, often seen with small-molecule drugs.

The concept of ADCs is not new. Since the late 1970s, drug developers have been working to realize targeted drugs. The development of monoclonal antibody technology by Köhler and Milstein in 1975 was an important step, paving the way for highly selective antitumor therapeutics and ultimately resulting in the development of immunoconjugates, or antibody-drug conjugates. [13]

The unique targeting properties of monoclonal antibodies have made it possible to conjugate or link them to radionuclides, cytotoxic agents and enzymes, and use them in therapeutic and imaging applications.

8.0 Overcoming Clinical Barriers
ADCs are generally more effective in the treatment of hematological or liquid cancers. To be successful as a therapeutic treatment for solid tumors, ADCs must overcome barriers to penetration within tumor masses and antigen heterogeneity, while combining sufficient conjugated drug potency with efficient drug release from the antibody inside tumor cells. [14]

Unfortunately, early trials with, for example, cBR96-doxorubicin (also known as SGN 15; Seattle Genetics), a tumor-specific monoclonal antibody linked to doxorubicin (Adriamycin; Pfizer), showed little clinical efficacy, poor pharmacological parameters and problems with toxicity. [15] cBR96-doxorubicin showed limited clinical antitumor activity in metastatic breast cancer as well as glioma.

The gastrointestinal toxicities were considered to be the result of binding of the agent to normal tissues expressing the target antigen, which also compromised delivery of the immunoconjugate to the tumor sites. [16] [17]

Since the early failures, better antibody engineering and production combined with improved target selection, cytotoxic synthesis and a deeper understanding of conjugation chemistry have led to a number of successful drug candidates entering clinical trials.

The first ADCs contained well-known cancer therapeutics, including doxorubicin and methotrexate. Today, however, most ADCs carry vastly more potent and highly toxic cytotoxins, including monomethyl auristatin E (MMAE) and monomethyl auristatin F (MMAF), microtubule-depolymerizing maytansinoid derivatives such as DM1 or DM4, calicheamicins (a class of enediyne antibiotics), and duocarmycin analogs. Other, still experimental, cytotoxins in trial drugs include derivatives of pyrrolobenzodiazepine (PBD) antibiotics.

The first ADC to receive market approval was gemtuzumab ozogamicin (Mylotarg®, Wyeth/Pfizer). The drug, a monoclonal antibody to CD33 linked to a cytotoxic agent from the class of calicheamicins, was approved in 2000. [18] However, due to unacceptable adverse events, Pfizer withdrew the drug from the market in 2010.

9.0 Production of ADCs
The complexity of production and manufacturing, as well as of the technology transfer between development partners, is compounded by the complexity of the product itself and the processes required for its production.

The production of ADCs is based on a mix of biotechnology and synthetic chemistry. [Figure 4: Antibody-drug Conjugates: A Variety of Chemistries] Hence, manufacturing requires both biologics and small-molecule manufacturing capabilities. Mammalian cell cultures are used to produce the monoclonal antibodies, and synthetic chemistry is used to manufacture the linker and the cytotoxic component.

While the general objective in the manufacturing of biologics is to have a simple supply chain—making technology transfer easier—this may not always be possible in the manufacturing of ADCs. One reason is that the manufacturing of ADCs may involve multiple players in multiple locations. For example, while one company may manufacture antibodies and the highly potent cytotoxin, it may depend on a second company to provide the linker technology (offering a variety of technologies including non-cleavable, peptide-cleavable, disulfide-cleavable and acid-cleavable). A third company may be asked to manage the conjugation. While today many CMOs are able to offer most parts of the production and manufacturing of ADCs, no one is able to offer a single-source solution.

10.0 Managing the Process
From a production and manufacturing perspective, managing the various steps of building an ADC is complex. The safe production and manufacturing of ADCs is technically challenging because it involves the coupling of a biologic to a chemical in which both need to retain their original activity. In addition to the standard impurity and stability tests, CMOs need to be able to show that the actual conjugation process has worked.

While conjugation of biological molecules to non-highly-potent active pharmaceutical ingredients is an established technology in applications ranging from vaccine delivery to in vivo diagnostics, conjugation with highly potent molecules is much more complex. One of the main reasons is the containment required when dealing with cytotoxic molecules—an environment decidedly different from that for biomolecules. Handling the cytotoxins, for example, necessitates manufacturing in a high-containment aseptic biological manufacturing facility to limit occupational exposure levels.

Furthermore, the manufacture of ADCs involves mammalian expression systems as well as bioreactor systems. To remain within good manufacturing practice (GMP) guidelines, these production systems require extensive upstream and downstream processes.

But the complexity is not limited to the production and manufacturing environment. [19] In practice, it has been very difficult to control the site of conjugation as well as the final stoichiometry. For example, the currently approved ADCs are produced by conjugation to surface-exposed lysines (ado-trastuzumab emtansine) or by partial disulfide reduction and conjugation to free cysteines (brentuximab vedotin). These stochastic modes of conjugation lack control, leading to heterogeneous drug products with unpredictable and varied numbers of drugs conjugated across several possible sites. [9][10]
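As a rough illustration of this heterogeneity, stochastic conjugation is sometimes approximated as an independent binomial draw over the candidate sites. The site count and per-site probability below are hypothetical values chosen to give a mean drug-to-antibody ratio (DAR) of about 3.5, not figures from either approved product:

```python
# Toy binomial model of stochastic conjugation: each of n candidate sites
# is conjugated independently with probability p. Values are illustrative.
from math import comb

n_sites = 8          # assumed number of accessible conjugation sites
p = 3.5 / n_sites    # per-site probability, chosen so the mean DAR is 3.5

def dar_fraction(k: int) -> float:
    """Fraction of antibodies carrying exactly k drugs under this model."""
    return comb(n_sites, k) * p**k * (1 - p)**(n_sites - k)

mean_dar = sum(k * dar_fraction(k) for k in range(n_sites + 1))
unconjugated = dar_fraction(0)
print(f"mean DAR = {mean_dar:.2f}")
print(f"fraction with DAR 0 = {unconjugated:.1%}")
```

Even this idealized model spreads the product over nine DAR species (0 through 8), which is exactly the kind of heterogeneity that site-specific conjugation technologies aim to eliminate.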

The proper coupling of the antibody and cytotoxin is crucial, since it significantly influences the ADCs’ efficacy, safety, stability and pharmacokinetics. All elements must function as intended (both for the separate biologic and cytotoxin as well as the final conjugated combination), a concern that has become an important focus for regulators as well as drug developers and their commercial (CMO) partners.

The example of gemtuzumab ozogamicin illustrates the importance. In a 2001 article published in Clinical Cancer Research, Peter Bross, Julie Beitz and colleagues note that approximately 50 percent of the antibodies were not linked to the calicheamicin-derivative cytotoxin. [20] Because the site of conjugation and the extent of drug loading have an important effect on the efficacy, safety, pharmacokinetics and immunogenicity of the drug, the authors suggested that the reliability of gemtuzumab ozogamicin should be questioned.

11.0 Technology Moves Forward
Over the last decade, researchers have improved the conjugation chemistry process. As a result, site-specific conjugation, designed to create a more homogeneous product, is now among the novel technologies being adopted.

One team recently published their success in developing a robust platform for rapid production of ADCs with defined and uniform sites of drug conjugation. Their results showed an ADC that proved highly potent in in vitro cell cytotoxicity assays. [21]

For CMOs involved in the manufacturing and production of ADCs, it is important to truly understand the complex technology before getting involved. But with new cytotoxic compounds and ongoing development of linker chemistries, the possibilities will be virtually limitless.

While most drug developers are focusing on oncology, a growing number of researchers are exploring the opportunities of ADCs in other therapeutic areas, including autoimmune and inflammatory disease. One company, Intellect Neurosciences Inc. (New York, NY), for example, is developing approaches aimed at arresting or even preventing Alzheimer's disease and other neurodegenerative diseases: ADCs that target amyloidogenic proteins and gain additional neuroprotective properties through chemical combination with a small molecule such as an antioxidant. [22]

12.0 Conclusion
Pharmaceutical development practices based on GMPs established by the FDA in 1978, and QbD principles established more recently, offer a robust mechanism to ensure that the equipment, processes and people in ADC development do what is expected and required to produce high-quality results. [23]

This includes a system for documenting expectations, as well as the results of testing that prove those expectations are satisfied. Furthermore, such a process provides the tools for investigation and correction when unanticipated deviations occur, and for documenting them in a controlled and logical fashion. The focus is on excellence, and the same is true for technology transfer.

With the increasing complexity of pharmaceutical products and biologics, a properly structured, robust technology transfer process is crucial. Furthermore, to succeed in this unique market, CMOs need to demonstrate a systematic, streamlined approach to the entire ADC project.

Today most CMOs have become true partners of the technology originators and industry partners they work with; they are no longer considered just vendors of specialized services. This has consequences for the process by which novel technology is transferred between parties, including a set of regulatory implications. [5]

Only a select number of established and experienced CMOs will be able to meet the criteria for the flawless transfer of technology in complex projects such as the development, production and manufacturing of ADCs.

Given the involvement of multiple industry partners participating in the development and manufacturing of ADCs, each partner needs to understand the product and technology needs, and be able to clearly recognize how to translate this into appropriate specifications. Furthermore, all partners need to be able to identify the process and understand how to make it robust upon scale-up to deliver on time. This is especially the case when one drug-developing originator works with multiple industry and CMO partners.


 

Cynthia Wooge, Global Strategic Marketing, SAFC | Successful Strategies in the Development and Technology Transfer of Antibody-Drug Conjugates |

Received April 24, 2014 | Accepted May 1, 2014 | Published online May 2, 2014 | ADC Review / Journal of Antibody-drug Conjugates. | doi: 10.14229/jadc.2014.5.2.001

Creative Commons License
This work is published by InPress Media Group, LLC (Successful Strategies in the Development and Technology Transfer of Antibody-Drug Conjugates by Cynthia Wooge) and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at http://adcreview.com/about-us/permission/.



Challenges and Strategies for the Downstream Processing of Bispecific Antibodies (BsAbs)


Abstract
The overall trend in the biotherapeutics arena is a transition toward molecules that have higher value and potency, and thus smaller production volumes. Antibody-drug conjugates (ADCs) and bispecific monoclonal antibodies (BsAbs) are two such product classes. Neither is new; both have been studied for many years. [1]

ADCs utilize antibodies as a means to guide drugs to specific targets, whereas a bispecific monoclonal antibody (BsAb) is composed of fragments of two different monoclonal antibodies and binds to two different types of antigens. The innovative research and advances in the fields of bispecific antibodies and ADCs, and in turn bispecific antibodies that are themselves ADCs, hold great promise for the future development of therapeutics for a variety of diseases. [1]

Baeuerle and Raum present an extensive review of BsAbs in their article. [2] BsAbs are most commonly used for cancer immunotherapy, where they are designed to bind simultaneously to a cytotoxic cell and a target tumor cell to be destroyed. Targeting two antigens simultaneously can be a promising approach. [3] BsAbs can be expressed in mammalian and non-mammalian systems; an example of the latter is two Fab antibody fragments expressed in a microbial platform and joined through a chemical cross-linker. The purification of these molecules can be complex, and a platform approach is not always feasible for downstream processing. These molecules can be unstable, diverse in their make-up and prone to aggregation; they may or may not bind to Protein A, and they can vary in size, with varying impurity profiles. This paper reviews the challenges associated with the manufacturing of BsAbs in mammalian cell cultures and the strategies that can be implemented to overcome those challenges.


1.0 Introduction
Monoclonal antibodies closely resemble naturally occurring immune-response antibodies. As mentioned earlier, bispecific antibodies (BsAbs) are antibodies with dual specificity in their binding arms; they usually do not occur in nature and have to be created through recombinant technology, by somatically fusing two hybridomas (hybrid hybridoma, or quadroma) or by chemical means. Quadroma technology is one of the earlier methods used for the production of bispecific antibodies. The antibodies secreted by a quadroma, including the bispecific antibody, closely resemble conventional monoclonal antibodies: their molecular weight is approximately 150 kDa and they are relatively stable molecules. In addition to the dual-specific antigen-binding fragment (Fab), these antibodies contain an Fcγ part, and thus can be considered trispecific. Triomab® (Trion Pharma) is one of the most advanced bispecific antibody formats produced via the improved quadroma approach: it uses the somatic fusion of a murine and a rat hybridoma cell line, expressing monoclonal antibodies of two IgG subclasses selected for their preferential pairing.


Figure 1: Examples of BsAbs

Another advanced bispecific antibody format is BiTE® (Micromet/Amgen), the bispecific T-cell engager. These antibodies are single polypeptide molecules of ~55 kDa produced by recombinantly linking the four variable domains of the heavy and light chains required for two antigen-binding specificities. They are capable of efficiently redirecting T-cell cytotoxicity against various target cells without any requirement for pre- or co-stimulation of the effector T cells. [4]

A competing bispecific antibody format called DART™ (“Dual-Affinity Re-Targeting”; MacroGenics) is based on two polypeptide chains associated noncovalently in a diabody, capable of targeting multiple different epitopes with a single recombinant molecule. The DART™ platform has been engineered to accommodate virtually any variable-region sequence in a “plug-and-play” fashion with predictable expression, folding and antigen recognition. [5]

There are around 40 different bispecific antibody formats in development by industry at the moment. [6] These include tandem single-chain variable fragments (scFv), diabodies, tandem diabodies, the two-in-one antibody, and dual variable domain antibodies (DVD-Ig). [7]

The primary challenges in bispecific antibody production include chemistry, manufacturing and controls (CMC) issues, production yield, homogeneity and purity. While production of small amounts is typically straightforward, cost-effective manufacturing at large scale can require major effort.

Biological activity of antibody-based therapeutic molecules is closely related to their chemical and structural stability. Degradation may occur at various stages of the antibody life cycle, from production/purification through formulation, storage and delivery. The two main categories of degradation are physical and chemical. Aggregation is the most common type of physical degradation: monomeric antibody units bind to each other, forming dimers, trimers, tetramers or even higher-molecular-weight aggregates whose size can range from nanometers to microns. Aggregation can be induced by the various stress factors an antibody faces during its life cycle: temperature change, freeze/thaw, mechanical stress (agitation, pumping, filtration, filling), pH/conductivity change, etc. Chemical degradation occurs via oxidation, deamidation, isomerization, cross-linking, clipping and fragmentation. Successful production of an antibody-based therapeutic requires careful assessment of the various degradation pathways possible for a molecule (both physical and chemical) and the implementation of controls over those pathways. [4]

Bispecific antibodies have an increased tendency to form high-molecular-weight aggregates compared with the parental immunoglobulins. The tendency to aggregate is often concentration dependent: in one example of a modular IgG-scFv bispecific antibody, aggregate levels of up to 50% were found at a concentration of 5 mg/ml. It was shown that introducing a VH-VL interchain disulfide bond to stabilize the scFvs could help prevent or reduce aggregation of the bispecific molecule to levels below 5%. [8]


Figure 2: The general schematic for a BsAb manufacturing process

2.0 Harvest Clarification
The vast majority of current therapeutic antibodies are still produced in mammalian cell lines in order to reduce the risk of immunogenicity due to non-human glycosylation patterns. Bispecific antibodies without any glycosylation can, however, be successfully produced in bacteria.

For bispecific antibodies expressed in mammalian cells (most commonly CHO and PER.C6® cell lines, with HEK293, BHK and NS0 also used), cell culture conditions mimic those used for monoclonal antibody processes. As such, harvest clarification methods are also similar. For process volumes ≤ 2,000 L, companies often employ normal flow filtration for primary and secondary clarification, with depth filtration the most common normal flow filter used at this step. For batch sizes greater than 2,000 L, it is more economical to use centrifugation for primary clarification and normal flow filtration (depth filtration) for secondary clarification. The use of depth filters can also reduce impurity levels (i.e., HCP and DNA), which can alleviate some of the strain on the downstream purification steps. As titers and cell densities increase, the use of flocculation polymers or acid precipitation is becoming more common at harvest. The addition of flocculants (typically cationic polymers) or acid to the bioreactor prior to harvest can lower impurity levels and increase secondary clarification filter capacities (post centrifugation). For some bispecific antibodies, due to their potency, titers can be lower than for standard mAbs, which can influence critical parameters (i.e., cell densities, viability, particle size distribution) that affect the harvest clarification process.
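The volume-based rule of thumb above can be sketched as a simple decision function. The 2,000 L threshold comes from the text; the function name and return labels are illustrative:

```python
# Minimal sketch of the volume-based harvest clarification rule described
# above. Labels are illustrative, not a formal process specification.

def primary_clarification(batch_volume_l: float) -> str:
    """Suggest a primary clarification approach based on batch volume."""
    if batch_volume_l <= 2000:
        # Normal flow (depth) filtration for both primary and secondary steps
        return "normal flow (depth) filtration"
    # Above 2,000 L, centrifugation is typically more economical for primary
    # clarification, followed by depth filtration for secondary clarification
    return "centrifugation + depth filtration"

print(primary_clarification(500))
print(primary_clarification(10000))
```

In practice the choice also depends on the titer, cell density and particle size distribution noted above, so the threshold is an economic guideline rather than a hard rule.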

3.0 Chromatography
As with monoclonal antibodies, the next unit operation after clarification in the downstream process of a bispecific antibody is the capture step. For full-length bispecific IgG molecules or IgG-like BsAbs (i.e., those containing the Fc region of an antibody), this initial chromatography step can be performed using a Protein A media. BsAbs produced by quadromas assemble randomly, yielding some bispecific molecules alongside the parent monospecific IgG types. These molecules can be initially purified by Protein A, but other chromatographic methods are needed to separate the target bispecific molecule from the product-related impurities.

Full-length BsAbs can also be generated by expressing the individual antibodies separately. For example, the two antibodies can be mixed under optimized chemical conditions to produce the BsAb [9], knobs-into-holes technology can be used [10], or half antibodies can be produced separately, to name a few approaches. In these cases the use of Protein A for capture is the same as for traditional monoclonal antibodies: each antibody or half-antibody can be purified in this single step to a level sufficient to perform the in vitro generation of the BsAb molecule.

The capture step for bispecific antibody molecules that do not contain the Fc region of an antibody has been achieved using different chromatography types. Some BsAbs of this type are engineered to contain a histidine tag, which allows the use of immobilized metal affinity chromatography (IMAC) for the initial chromatography step. [11]

Other small bispecific antibody molecules containing the variable region of the kappa light chain can be captured using Protein L affinity chromatography [12][13]. Although the binding capacities (in mass terms) for IMAC and Protein L are generally lower than for Protein A, the molar capacities are not as low, because these molecules can be two to three times smaller in molecular weight than a monoclonal antibody.
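To illustrate the mass-versus-molar capacity point, a back-of-the-envelope comparison can be helpful. The capacity and molecular weight figures below are hypothetical placeholders chosen for illustration, not vendor or process data:

```python
# Illustrative mass vs. molar binding capacity comparison.
# All numbers below are assumed, not measured values.
MAB_MW = 150_000   # g/mol, typical full-length IgG
BSAB_MW = 55_000   # g/mol, e.g. a small scFv-based BsAb (~2-3x smaller)

protein_a_capacity = 40.0  # g of mAb bound per L of resin (assumed)
protein_l_capacity = 20.0  # g of small BsAb bound per L of resin (assumed)

molar_mab = protein_a_capacity / MAB_MW    # mol bound per L of resin
molar_bsab = protein_l_capacity / BSAB_MW  # mol bound per L of resin

# Despite a 2x lower mass capacity, the molar capacity here is higher,
# because each captured BsAb molecule is ~2.7x lighter than an IgG.
print(f"Protein A: {molar_mab * 1e6:.1f} umol/L resin")
print(f"Protein L: {molar_bsab * 1e6:.1f} umol/L resin")
```

With these assumed numbers the Protein L step binds more molecules per liter of resin even though it binds less mass, which is the point made above.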

When affinity chromatography methods are not an option for capture, the use of ion exchange (IEX) or hydrophobic interaction chromatography (HIC) has also been reported [14]. The use of IEX or HIC for capture can be as challenging to optimize for bispecific antibodies as it is for their monoclonal counterparts. Although acceptable purities can be achieved using IEX or HIC for capture, the process development effort required to reach the same levels as with affinity chromatography is generally higher.

Additional purification and polishing chromatography steps are required after the initial capture of BsAbs. In cases where a biologically derived ligand was used for capture (Protein A or Protein L), leached ligand must be removed as a process-related impurity in these steps. Product-related impurities such as aggregates and fragments need to be reduced to acceptable levels, and in the case of full-length BsAbs the parent monoclonal antibodies or unwanted BsAb variants need to be separated as well. Where the charge difference between the BsAb and the other full-length variants is relatively large, cation exchange chromatography operated in bind-and-elute mode can be sufficient to achieve the target purification [15].

Product-related impurities such as aggregates are generally present at higher concentrations than in a monoclonal antibody process, particularly for bispecifics that are not full-length antibodies. These impurities can have physicochemical characteristics very similar to those of the product, such as hydrophobicity and net surface charge. Removing them can require one to three IEX or HIC steps, but given the similarities between the molecules, the resolution of these chromatography steps is generally lower than in a mAb process. Yield may need to be sacrificed to achieve the target purity, and optimization of these steps can be more difficult than in a mAb process. Alternative modes of operating these chromatography types, including multicolumn countercurrent solvent gradient chromatography, have shown improvements in yield without sacrificing purity [16]. However, multicolumn processing strategies have not yet been implemented at large scale in the biomanufacturing industry for a commercial product.

4.0 Sterile Filtration – Post Harvest Clarification and Process Intermediates
Upstream sterile filtration in bispecific antibody processing is similar to that in monoclonal antibody processing. Media components influence the sterile filtration of bioreactor supplements and process intermediates (post primary and secondary clarification). Symmetric PVDF membranes are better suited for the sterile filtration of PEG- and hydrolysate-containing media types. Asymmetric PES membranes offer higher fluxes and capacities than symmetric membranes and are recommended for downstream purification process intermediates. Care and attention should be paid to sterile filtration at each step with respect to product recovery, capacity and operating flux.

5.0 Virus Filtration
Some bispecific antibody molecules are similar in size to virus filter membrane pore sizes (20 nm), which can lead to significant process challenges. For these molecules, asymmetric PES parvovirus filters should be evaluated first, with and without prefiltration. If product recovery is an issue, regulatory agencies have accepted the use of non-parvovirus filters (e.g., Planova® 35N). For smaller bispecific antibody molecules, asymmetric PES parvovirus filters are recommended. Membrane-based prefilters should be used to normalize the feed (with respect to aggregate and impurity levels), increase capacity and reduce operating costs for this process step. Special care should be taken when outlining the virus validation study, as this will dictate achievable process loadings.

6.0 Ultrafiltration and Diafiltration
In view of the increasing trend toward dosage forms for alternative routes of administration, in particular the subcutaneous (sc) route, the final ultrafiltration and diafiltration (formulation) step in bispecific antibody processes can present unique challenges due to the high viscosity of the highly concentrated product. Protein-protein interaction at high concentration is a major factor that may influence opalescence and viscosity. New product offerings from filtration companies designed to address formulation of high-concentration mAbs apply well to BsAb processes too; for example, EMD Millipore’s ‘D’ screen device allows a successful formulation step while remaining within the customer’s designated manufacturing process pressure range. Cellulosic-based membranes are often used at this step for their low binding characteristics.

7.0 ADC Considerations
Bispecific antibodies are prevalent in the area of cancer immunotherapy, particularly bispecifics engineered from scFvs (single-chain variable fragments). To increase drug targeting and in-vivo half-life and to decrease side effects, the BsAb can be coupled with a cytotoxic cancer drug; the resulting conjugate binds to antigens on the surface of cancer cells, where the cytotoxic drug is released and can attack the cancer cells. This realm of biotherapeutics is growing: to date, three ADCs have received market approval, and many more are in companies’ drug pipelines.

 


Figure 3: Example of an ADC

There are critical process considerations in the production of ADCs. Post production (as outlined in this paper), the bispecific antibody is conjugated with the cytotoxin and requires further purification (through chromatography or ultrafiltration/diafiltration). After conjugation, the process needs to be closed and is regulated by CDER (unlike other protein conjugations). At this stage of the process the batch sizes are small, so some manufacturing can be done in a hood, but this can be cumbersome; the use of disposable technology and closed purification systems (TFF and chromatography) is recommended. Due to the high toxicity of the chemicals used in these processes, it is recommended to check the chemical compatibility and leachables/extractables profile of the disposable technologies used (particularly the films).

8.0 Conclusions
Modified mAbs such as BsAb and ADC molecules are generating increased interest as the demand for targeted therapeutics with improved efficacy continues to grow. These molecules present challenges that differ from those of the “traditional” processes used for the development and manufacture of monoclonal antibodies.

The BsAb can also be coupled with a cytotoxic cancer drug to form an ADC that binds to antigens on the surface of cancer cells, further increasing drug targeting.

For BsAbs, the general purification process is similar to that for mAbs. However, there are some unique challenges: each step in recovery and purification must be optimized based on the process requirements and the molecule characteristics to ensure a robust, stable, and scalable BsAb production process.

Acknowledgments: The authors would like to thank Sladjana Tomic, David Beattie, Martin Zillman, and Mark Wagner for their help.


June 6, 2014 | Claire Scanlan, Elina Gousseinov, Alejandro Becerra-Artega, Ph.D, Ruta Waghmare, Ph.D | Corresponding Author Ruta Waghmare, Ph.D; ruta.waghmare@emdmillipore.com | doi: 10.14229/jadc.2014.6.6.001

Received May 8, 2014 | Accepted May 29, 2014 | Published online June 6, 2014

Creative Commons License
This work is published by InPress Media Group, LLC (Challenges and Strategies for the Downstream Processing of BiSpecific Antibodies (BsAbs) by Claire Scanlan, Elina Gousseinov, Alejandro Becerra-Artega, Ph.D, Ruta Waghmare, Ph.D) and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at http://adcreview.com/about-us/permission/.

The post Challenges and Strategies for the Downstream Processing of BiSpecific Antibodies (BsAbs) appeared first on ADC Review.

The Clinical Landscape of Antibody-drug Conjugates


Abstract
Introduction: Antibody-drug conjugates (ADCs) are a class of therapeutics that combine the selective targeting properties of monoclonal antibodies (mAbs) with the potent cell-killing activity of cytotoxic agents. Given the rapid pace of progress in this field, it is important for drug developers to have a high-level view of the landscape of ADCs in the clinic. This review analyzes ADCs tested in the field of oncology. Trials are evaluated by cancer type, trial status, phase, and characteristics of the ADC.

Methods: Two databases were used to evaluate current clinical studies: ClinicalTrials.gov and TrialTrove. After cross-referencing the results from each database, a total of 238 unique clinical trials were identified and analyzed.

Results: The clinical testing of ADCs is currently being performed predominantly in hematological malignancies (n=146). Among these, leukemia is the leading indication tested (n=77). There are 89 trials in solid tumors, with breast cancer being the most abundant (n=39). A significant number of clinical trials are in phase II (n=83). There are 47 unique ADCs in clinical trials. Among these ADCs, tubulin inhibitors are the most common warheads used, mainly the auristatins (n=22) and maytansinoids (n=16).

Conclusion: Our visualization of the clinical landscape of ADCs will help foster the design of future research efforts in this area of great clinical and scientific interest.


1.0 Introduction
The increasing global incidence of cancer and associated resistance patterns necessitates new treatment modalities to serve the patient population [1] [2] [3]. A number of methods are currently employed for cancer treatment including surgery, chemotherapy, hormonal therapy, radiation therapy, adjuvant therapy, cancer targeted therapies, and immunotherapy [4] [5]. The use of biologics in immunotherapy is of particular value to cancer treatment due to the selective nature of monoclonal antibodies (mAbs). These mAbs are able to bind to cells expressing a specific target antigen with high affinity and potentially decrease off-target toxic effects [6] [7]. The biotechnology industry is investing approximately one quarter of its resources in the development of mAbs while also devising next generation platforms to increase both drug efficacy and safety [6].

A number of strategies are currently available to utilize the properties and enhance the functionality of mAbs by coupling diverse moieties to the antibody. These include antibody-radionuclide conjugates, antibody-RNA conjugates, antibody-antibiotic conjugates, antibody-protein conjugates, antibody-fluorophore conjugates, antibody-enzyme conjugates, antibody-cytokine conjugates, and antibody-drug conjugates [5] [7] [8].

Antibody-drug conjugates (ADCs) provide a unique platform whereby naked mAbs are enhanced through conjugation with cytotoxic small molecule drugs. ADCs consist of three main components: the monoclonal antibody (mAb), the cytotoxic molecule (also referred to as the warhead), and the linker. As single agents, mAbs have greater specificity and a more favorable safety profile, but have limited antitumor responses [5] [9]. Small molecule cytotoxins have potent cell-killing activity, but also have significant toxic effects [5]. The linker is the molecular bridge that conjugates the small-molecule cytotoxin to the mAb. The linker and warhead together are termed the payload [Figure 1; Click to enlarge] [10].

Figure 1

When combined, ADCs facilitate the delivery of highly potent cytotoxic molecules directly to tumor cells expressing unique antigens that are specific to the mAb. As a result, ADCs also have the potential to increase the therapeutic window of non-selective cytotoxic agents [5] [7] [9] [10]. Clinical evaluations of ADCs compared to unconjugated mAbs against the same cellular targets in similar patient populations have demonstrated better response rates. These results reinforce the use of ADCs as a promising treatment modality for use in oncology [5] [11].

The ADC complex is engineered to remain stable after administration until the cellular target is reached [5]. The initial step in the ADC mechanism of action is the binding of the mAb to the target antigen on the cancer cell. Once the ADC is localized to the cell surface, the entire complex consisting of the mAb and payload is internalized through receptor-mediated endocytosis. Upon internalization, the ADC is trafficked to intracellular organelles where the linker is degraded, causing the warhead to be released inside the cell [5] [9] [10] [11]. Subsequently, the warhead disrupts cell division via a cytotoxin-specific mechanism, which ultimately causes cell cycle arrest and apoptosis. Two mechanisms of cell cycle arrest are currently utilized, one mechanism pertains to inhibition of tubulin polymerization as seen in auristatins and maytansines while the other mechanism is based on direct binding to DNA and subsequent inhibition of replication as seen in calicheamicins, duocarmycins, and pyrrolobenzodiazepines (PBDs) [10] [12].

In this review, ADCs used in clinical trials are evaluated in open, completed, closed, or terminated studies. Through evaluation of the global ADC portfolio available in clinical trial databases, it is the intention of this review to create a better understanding of the current clinical landscape.

2.0 Methods

Data Collection 
The current review evaluates data on clinical trials using ADCs as a treatment regimen in the oncology setting. The databases used to select clinical trials included the clinical database of the National Institutes of Health (www.ClinicalTrials.gov) and TrialTrove® (www.citeline.com/products/trialtrove/). The most recent search of the databases was completed on March 4th, 2014.

ClinicalTrials.gov 
ClinicalTrials.gov [13] is a database maintained by the National Library of Medicine (NLM) at the National Institutes of Health (NIH) which contains information on clinical studies provided and updated by the sponsor or principal investigator of the study. The search terms used in this database contained “‘Antibody Drug Conjugate’ OR ‘ADC’ OR ‘Antibody Drug Conjugates’” with “All Studies” selected for recruitment, study results, and study type. The phases selected for the search included “Phase 1, Phase 2, Phase 3, and Phase 4.” A total of 521 trials were generated from ClinicalTrials.gov given the search criteria listed above.

Of the 521 trials, those that were not testing in the oncology therapeutic area (n=281) were eliminated. The studies were further filtered to ensure that ADCs were used in the cohorts as single-arm, in combination, or in comparison with another drug or a number of drugs. Studies which did not test ADCs (n=172) were eliminated, resulting in 68 clinical studies testing ADCs in oncology. All studies taken from this database had NCT numbers as trial identifiers.

TrialTrove 
TrialTrove® [14] is a Citeline product which comprehensively documents pharmaceutical clinical trials in eight therapeutic areas and 180 disease settings. To locate relevant clinical trials on TrialTrove®, the search criteria were restricted to “Oncology” as the therapeutic area and “Antibody Drug Conjugates” or “ADC” as the therapeutic class. Trial phases “I, I/II, II, II/III, III, and IV” were selected as well as “Open, Closed, Temporarily Closed, Completed, and Terminated” for the trial status. A total of 345 trials were generated from the TrialTrove® search given the search criteria listed above.

The 345 studies were further filtered to ensure that ADCs were used in the cohorts as single-arm, in combination or in comparison with another drug or a number of drugs. Studies which included either fusion proteins or immunotoxins (n=23) were removed as these biologics are not considered to be ADCs, resulting in a total of 322 evaluable clinical trials. Additionally, studies that did not have NCT numbers linking them to the NIH database (n=89) were eliminated, resulting in a total of 233 trials.

Data Analysis 
Two independent reviewers analyzed the data obtained from each database and both agreed that the final list of trials fit the criteria for analysis. The lists of trials from each database, 68 trials from ClinicalTrials.gov and 233 trials from TrialTrove®, were cross-referenced using the NCT numbers as consistent trial identifiers between the two study sets. Duplicate trials (n=63) were eliminated resulting in a total of 238 unique clinical trials to evaluate.
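The cross-referencing step described above amounts to simple set arithmetic on NCT identifiers. A minimal sketch of the idea (the NCT numbers below are made up for illustration) might look like:

```python
# Hypothetical sketch of deduplicating trials across two databases
# using NCT numbers as the shared identifier.
clinicaltrials_gov = {"NCT0000001", "NCT0000002", "NCT0000003"}
trialtrove = {"NCT0000002", "NCT0000003", "NCT0000004"}

duplicates = clinicaltrials_gov & trialtrove     # trials found in both
unique_trials = clinicaltrials_gov | trialtrove  # union = unique trials

# Sanity check mirroring the review's bookkeeping:
# unique = (list A) + (list B) - duplicates
assert len(clinicaltrials_gov) + len(trialtrove) - len(duplicates) == len(unique_trials)

# With the review's actual counts: 68 + 233 - 63 = 238 unique trials.
assert 68 + 233 - 63 == 238
```

The same identity holds for the review's reported counts of 68 and 233 trials with 63 duplicates, giving the 238 unique trials analyzed.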

The 238 clinical trials were then classified according to oncology indications and cancer types explored in the study, phases of the trial, and drug characteristics based on conjugated warhead. While many trials test multiple cancer types in parallel in the same study, each cancer type was tabulated independently.

Additional study details documented in peer-reviewed journal articles, abstracts presented at conferences, or other electronic sources by the study sponsors were used to obtain specific information on the drug and/or trial as needed.

The final set of studies included trials in all phases (I, II, III, and IV), indications (hematological malignancies and solid tumors), cancer types, and trial statuses (open, completed, and terminated).

3.0 Results 
The current clinical landscape of ADCs consists of 238 clinical trials which have been classified and analyzed by indication and cancer types, phases, and novel ADC characteristics including warheads conjugated to the mAb.

Table 1

ADCs are being tested in both hematological malignancies (HM) and solid tumors (ST). A variety of cancer types are currently explored in clinical trials for both HMs (Table 1; Click to enlarge) and STs (Table 2; Click to enlarge) as well as unspecified cancer types or trials where both indications are studied concurrently (Table 3; Click to enlarge). ADCs are predominantly tested in HMs with 146 of the 238 clinical studies. Myelogenous leukemias, both acute and chronic, (n=64) are the leading cancer types tested in HMs followed by non-Hodgkin lymphoma (n=49).

Table 2

The majority of the remaining trials are tested in STs, consisting of 89 of the 238 clinical studies. Breast cancer (n=39) is the leading ST cancer type where ADCs are tested followed by clinical trials open to various solid tumors (n=14), lung (n=12), ovarian (n=10) and prostate (n=10) cancers. A pair of trials (n=2) analyze cancer types in both HM and ST with concurrent testing in non-Hodgkin lymphoma (NHL) and renal cancer. Finally, one trial is being tested in unspecified cancer types.

Table 3

The clinical trials are also spread throughout the stages of development (Table 4; Click to enlarge), with most of the ADC trials being studied in phase II (n=83) and a number of drugs in early-stage development in phase I (n=77). Clinical trials in later-stage development in phase III (n=28) are also ongoing, with four ADCs being tested: inotuzumab ozogamicin, gemtuzumab ozogamicin (Mylotarg®), trastuzumab emtansine (Kadcyla®), and brentuximab vedotin (Adcetris®). Kadcyla® and Adcetris® have already gained approval by the Food and Drug Administration (FDA). Kadcyla® was approved in 2013 for HER2-positive breast cancer and Adcetris® was approved in 2011 for Hodgkin lymphoma and anaplastic large cell lymphoma (ALCL).

Table 4

Of the 238 clinical trials, 47 unique ADCs are being tested (Click to open Table 5), with the mAbs targeting a variety of cell surface proteins. There are 31 ADCs that have only been tested in STs, 11 ADCs only tested in HMs, and 5 ADCs that have been tested in both indications (Adcetris®, lorvotuzumab mertansine, MDX-1203, pinatuzumab vedotin, and vorsetuzumab mafodotin).

 

Table 5

Of all the ADCs, Mylotarg® is the leading drug tested, consisting of 62 of 238 clinical trials. This is followed by Adcetris® and Kadcyla® with 47 and 34 of the 238 clinical trials, respectively.

Additionally, 11 unique cytotoxins are used in conjugation to the 47 ADCs (Figure 2; Click to enlarge). Predominantly, the tubulin inhibitors Monomethyl Auristatin E (MMAE) (n=16), Monomethyl Auristatin F (MMAF) (n=6), Maytansinoid DM1 (DM1) (n=7), and Maytansinoid DM4 (DM4) (n=9) are the most common warheads. The tubulin inhibitors comprise 38 of the 47 ADCs. The 9 remaining ADCs are conjugated to calicheamicin (n=2), topoisomerase-I inhibitor/irinotecan metabolite (SN-38) (n=2), doxorubicin (n=2), duocarmycin (n=1), pyrrolobenzodiazepine (PBD) (n=1), and other or unknown cytotoxins (n=1).
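The warhead tallies above can be cross-checked with a short script, using the per-ADC counts exactly as reported in the review:

```python
from collections import Counter

# Number of ADCs per conjugated warhead, as reported in this review
# (47 unique ADCs in total).
warheads = Counter({
    "MMAE": 16, "MMAF": 6, "DM1": 7, "DM4": 9,  # tubulin inhibitors
    "calicheamicin": 2, "SN-38": 2, "doxorubicin": 2,
    "duocarmycin": 1, "PBD": 1, "other/unknown": 1,
})

tubulin = sum(warheads[w] for w in ("MMAE", "MMAF", "DM1", "DM4"))
print(f"Tubulin inhibitors: {tubulin} of {sum(warheads.values())} ADCs")
```

Running this confirms the figures in the text: 38 of the 47 ADCs carry a tubulin-inhibitor warhead, leaving 9 with other mechanisms.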

Figure 2

4.0 Discussion 
It is clear that the scientific potential behind ADCs and the clinical need for this class of drugs in oncology are both substantial. Our goal with this review was to provide a complete visualization of the clinical landscape of ADCs that can foster the design of future research efforts and treatment options for patients with cancer. Several useful insights for clinical trial designers are readily apparent in this analysis.

First, there is a strong separation of ADC-based research in the clinic that tends to divide HM and ST indications into different trials. Studies tend to explore cancer types exclusively in either HM (n=146) or ST (n=89). Only three trials have combined exploration in both indications – two of which are specifically designed for NHL and renal cancer. This disparity may partly be due to the technical difficulties of mixing HM-based trial designs with ST-based trial designs. The biology of hematologic-based malignancies may be so different that there is little overlap of the ADC target in solid tumors. Additionally, the organization of regulatory agencies which separates HM and ST, particularly the FDA, may make such mixed studies extremely challenging to implement.

Second, the quantitative breakdown of HM versus ST trials is curious. While there is a numerically larger total of clinical trials in HM than in ST (146 versus 89), a full 62 of the 146 HM trials involve just one ADC, Mylotarg®. Nearly half of the HM space is attributable to this ADC alone, irrespective of the other 15 ADCs being tested in HM. Excluding Mylotarg® trials, there are nearly the same number of HM as ST trials, 84 versus 89.

Third, the ADC landscape reveals a considerable concentration of trial activity in the leukemias (77 of 146 HMs) and breast cancer (39 of 89 STs). There is clearly ample opportunity for development of ADCs in HMs and STs which are relatively unexplored, such as myeloproliferative disorders, melanoma, mesothelioma, or CNS, endometrial, or testicular cancers. All of these settings have two or fewer trials each and provide a potential opening for new ADCs, should appropriate targets be identified.

Fourth, the majority of clinical studies are currently in the early phase (I and II). There are a similar number of phase I trials in STs (n=40) compared to HMs (n=35). However, there are a significantly higher number of HM studies in the later stage (III). This may be biased by the initial early success of Mylotarg® in HM. This may also be due to the possibility of a higher failure rate of ST trials in early phases and the possibility that HMs are more tractable with ADCs compared to STs. Investigation of these possibilities is out of the scope of this review, but would be an interesting subject for future analyses.

Fifth, the current landscape of ADCs is obviously dominated by tubulin inhibitors as toxins, with 38 of the 47 ADCs conjugated to MMAE, MMAF, DM1, or DM4. This represents a potential opportunity for the use of cellular toxins with alternative mechanisms of action e.g. DNA-binding cytotoxins (calicheamicin, doxorubicin, duocarmycin, SN-38, and PBDs) when designing new ADCs for future studies.

In conclusion, the clinical landscape of ADCs provides a useful tool for all involved in oncology drug development. It will be exciting to see how this landscape evolves with the entry of new technologies, approaches, and targets.

Disclosures
Authors are employed by MedImmune, LLC. No funding was received in support of this article. 

Acknowledgements
The authors would like to express gratitude to David Jenkins, Jennifer McDevitt, and Mohammed Dar who have kindly reviewed the manuscript and provided valuable feedback. Additionally, we are grateful to Citeline for providing us with the permission to use their database in our analysis and thus, share our findings with the Journal.


August 1, 2014 | Sohayla Rostami, Ibrahim Qazi, PharmD, Robert Sikorski, MD, PhD | Corresponding Author Robert Sikorski, MD, PhD | doi: 10.14229/jadc.2014.8.1.001

Received June 30, 2014 | Accepted July 25, 2014 | Published online August 1, 2014

Creative Commons License
This work is published by InPress Media Group, LLC (The Clinical Landscape of Antibody-drug Conjugates by Sohayla Rostami, Ibrahim Qazi, PharmD, Robert Sikorski, MD, PhD) and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at adcreview.com/about-us/permission.



Addressing Dendritic Cells for Anticancer Immunity


1.0 Commentary 
By harnessing the body’s own immune system, the treatment of several severe diseases, e.g. infectious diseases, has become remarkably successful over the last centuries. It is therefore highly desirable to adopt this concept for other diseases as well. Recently, cancer immunotherapy has emerged as an alternative route to treat malignancies effectively without the adverse side effects usually caused by classical cytostatics. In general, the immune system is considered a key regulator of tumor progression, but its specificity and power to recognize and eliminate abnormal cells (tumors as well as metastases) may at the same time be utilized to combat cancer completely. However, selective tumor-antigen delivery to specialized antigen-presenting cells of the immune system is required to guarantee anticancer immunity.

Among the various immune cell populations, dendritic cells (DCs) are able to present external antigens effectively and thus generate antigen-specific adaptive immune responses. In particular, DCs equipped with the surface marker CD8 are able to cross-present such antigens to CD8+ T cells, which upon activation differentiate into cytotoxic T cells (CTLs) that provide antitumor immunity by selectively killing malignant cells. Conveniently, antigen-presenting CD8+ DCs also express the C-type lectin receptor DEC-205, which can be used as a targeting structure to deliver antigens to this immune cell type exclusively. Moreover, a monoclonal antibody directed against extracellular DEC-205 epitopes, termed aDEC-205, is commercially available and has been shown to deliver microcapsules to CD8+ dendritic cells. [1]

Yet, for efficient vaccination, multiple signals (e.g. antigens and immunostimulants) must be co-delivered to the same immune cell type (e.g. DC) at the same time. In that respect, multifunctional polymeric nanocarriers equipped with a targeting moiety seem to be ideal vaccine candidates for antitumor immunization. Owing to their low immunogenicity, polymers based on N-(2-hydroxypropyl) methacrylamide (HPMA) may be used for such purposes. Interestingly, access to multifunctional HPMA-based carriers can be gained via reactive ester precursor polymers that, in a facile post-polymerization process, can be equipped with various functionalities for a range of applications. [2] In fact, first attempts have recently been made by ligating highly specific glycosylated tumor antigens, together with a T-helper cell epitope, to HPMA-based block copolymers to induce humoral anticancer immune responses. [3] However, for the induction of a cellular immune response via CTLs, the delivery of such antigens to CD8+ DCs would be favorable as well.

Tappertzhofen et al. have recently published a novel method by which such multifunctional HPMA copolymers can be attached to aDEC-205 antibodies, mediating their delivery to dendritic cells of the immune system. In previous work they used HPMA copolymers with pendant maleimide side groups that, after antibody modification with Traut’s reagent (2-iminothiolane), could be used in a ligation process; however, this afforded a certain degree of cross-linking of polymers with a few antibodies. [4] In their new approach they used maleimide endgroup-modified heterotelechelic HPMA copolymers that avoid antibody cross-linking. [5] Instead, more carrier material can be attached to a single antibody, affording so-called “star-like” topologies, which seems economically more advantageous for delivering several immunologically relevant components to one single cell (Figure 1).

Figure 1

In their work Tappertzhofen et al. were able to directly compare the properties of the cross-linked aDEC-205 antibody-polymer conjugates with the well-defined “star-like” constructs. Interestingly, binding and uptake studies with primary immune cell populations demonstrated advantageous properties for the novel “star-like” conjugates.

For example, after incubation of these polymer-antibody conjugates with primary immune cells isolated from the bone marrow of mice, so-called bone marrow-derived dendritic cells (BMDCs), a specifically higher uptake of the “star-like” conjugates into CD11c+ BMDCs was found compared to the “cross-linked” conjugate constructs. FACS analysis showed that this uptake was significant after 24 h compared to the relevant control samples (polymers conjugated to a non-specific IgG-isotype antibody, or reference polymers in which the maleimide functionality was quenched with mercaptoethanol). Moreover, confocal laser scanning microscopy (CLSM) imaging revealed intracellular localization for both “star-like” and “cross-linked” conjugates of aDEC205 antibodies, but again the “star-like” species showed higher selectivity: no uptake could be identified for their control samples, whereas for the conjugates based on pendant maleimide side chains, polymers were found intracellularly even in cases where the conjugates were synthesized with non-specific IgG isotypes, or even for the reference polymers alone. In that respect, the maleimide endgroup-bearing polymers seem to avoid such unspecific uptake and thus guarantee antibody-mediated uptake much more selectively (Figure 2).

Figure 2

Interestingly, in splenic immune cell populations, where DEC205 is expressed primarily by interdigitating CD8+ DCs, uptake of the aDEC205-HPMA polymer conjugates into dendritic cells was also found, and even among these heterogeneous immune cell populations the “star-like” topology showed the greatest selectivity compared to its “cross-linked” analogue. In addition, incubation studies with each polymer derivative at 37 °C and 4 °C overnight showed that the aDEC205-mediated uptake process is highly energy-dependent; consequently, an active process of selective binding and internalization, rather than non-specific adsorption to the cell membrane, should be involved. [5]

2.0 Conclusion
To conclude, the comparison of antibody-polymer conjugates of different topologies (“cross-linked” versus “star-like”) demonstrated that the well-defined constructs derived from maleimide end-group modified HPMA copolymers prevent non-specific interaction with immune cells and guarantee antibody-mediated active delivery to the target cell type. Using aDEC205 as the antibody, delivery of multifunctionalizable HPMA carrier polymers to CD8+ dendritic cells is possible, enabling novel applications, especially for immunotherapeutic purposes. [5] Considering the numerous possibilities for equipping the HPMA carriers with further tumor-relevant antigens and immunostimulants, as recently shown by ligation of selective MUC1-derived glycopeptide antigens to HPMA block copolymers, [3] a combination with aDEC205-mediated delivery would guarantee antigen cross-presentation to CD8+ T cells and differentiation into specific cytotoxic T cells (CTLs) for a selective antitumor response on a cellular level. As a result, such well-defined multifunctional HPMA-aDEC205 polymer conjugates can be considered a novel vaccine delivery platform towards antitumor immunity.


August 15, 2014 | Lutz Nuhn, PhD | Institute of Organic Chemistry, Johannes Gutenberg-University, Duesbergweg 10-14, D-55099 Mainz (Germany) | doi 10.14229/jadc.2014.8.15.002

Received August 5, 2014 | Accepted August 11, 2014 | Published online August 15, 2014

Creative Commons License
This work is published by InPress Media Group, LLC (Addressing Dendritic Cells for Anticancer Immunity by Lutz Nuhn, PhD) is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at adcreview.com/about-us/permission.




Downstream Processing Considerations for Antibody Variant Therapeutics


Abstract
Novel platforms such as antibody derivatives, peptide-based therapies, and gene and stem cell-based therapies are gaining a foothold in the market for several reasons, including the need for better pharmacokinetics (PK) and pharmacodynamics (PD), improved potency against disease targets, the ability to treat more than one aspect of a disease simultaneously, better and cheaper production processes, reduced side effects, and the biosimilar cliff.

In this article, we will focus on three types of antibody derivatives, namely bispecific antibodies (BsAbs), antibody fragments (Fabs), and fusion proteins. We will include an overview of each and discuss the typical downstream processes, highlighting specific process challenges. Scale-up considerations will also be included.


1.0 Introduction
Monoclonal antibodies (MAbs) continue to dominate as a class of therapeutics for the biotechnology industry. However, the overall trend in biotherapeutics is a transition towards molecules that have higher value and improved bioavailability. Traditional MAbs are altered to achieve this goal, and antibody variants such as antibody fragments (Fabs), bispecific monoclonal antibodies (BsAbs), and fusion proteins are being explored.

The term Fab antibody fragment is nearly self-explanatory: it is the antigen-binding (Fab) fragment of an antibody, containing the variable region [1]. A bispecific monoclonal antibody (BsAb) is composed of fragments of two different monoclonal antibodies that bind to two different types of antigens [2]. Fusion proteins are produced by gene fusion techniques that allow the production of recombinant proteins featuring the combined characteristics of the parental products [3].

In terms of common expression platforms, Fabs can be expressed in both mammalian and bacterial expression systems. Bacterial expression systems are more common for Fabs, which can be expressed in E. coli either as inclusion bodies or in soluble form (soluble being more common). BsAbs and fusion proteins are more typically expressed in mammalian cell cultures.

A typical downstream process for these antibody variants is shown in Figure 1.

 

 

Fig 1: Typical Downstream Process. Bioreactor/Fermentor > Harvest/Lysis (if bacterial)/Clarification > Capture > Polishing > Virus Clearance (if mammalian cell line) > UF/DF > Final Sterile Filtration.

In this article, we outline some of the unique requirements and challenges posed by these antibody variants in terms of recovery, purification, and scale-up/process transfer.


2.0 Part 1 – Recovery

 

 

 

Fig 2: Recovery: Bioreactor/Fermentor > Harvest/Lysis (if bacterial)/Clarification.

The vast majority of current therapeutic antibodies, including BsAbs and fusion proteins, are still produced in mammalian cell lines in order to reduce the risk of immunogenicity due to non-human glycosylation patterns [1]. However, Fabs are more commonly produced in bacterial systems (E. coli) because of their smaller size and for economic reasons. Bispecific antibodies without any glycosylation can be successfully produced in bacteria as well.

For mammalian cell cultures (used for BsAbs and fusion proteins), normal flow depth filtration can be used for primary and secondary clarification steps for process volumes ≤ 2,000L. Depth filtration has also been shown to assist with removal of impurities such as HCP and DNA and improve downstream filter and column capacities. As titers and cell densities increase, the use of agents such as flocculation polymers/ acid precipitation are becoming more common at harvest.

For bacterial expression systems, clarification is often one of the most challenging steps. For soluble proteins, microfiltration tangential flow filtration (MF-TFF) is often used instead of normal flow filtration; however, centrifugation followed by normal flow filtration (NFF) can also be evaluated. Typically, TFF yields higher product recovery and is more economical. There is also renewed interest in older technologies such as using diatomaceous earth (DE) as a body feed for clarification. With secreted proteins, whole cells are separated from the fermentation broth and the particle size is thus larger; as a result, microfiltration (using TFF), centrifugation, and normal flow filtration are all viable options. Endonuclease agents can also be used prior to clarification to digest DNA and RNA and to improve the efficiency of the clarification process [2].

For sterile filtration post clarification, the capacity is influenced by the bioreactor/fermentor media components. Symmetric PVDF membranes are better suited for the sterile filtration of PEG- and hydrolysate-containing media types. Asymmetric PES membranes are also available and can be evaluated. The sterile filtration step should be optimized with respect to product recovery, capacity, and operating flux.


3.0 Part 2 – Purification

 

 

Fig 3. Purification: Capture > Polishing > Virus Clearance (if mammalian cell line) > UF/DF > Final Sterile Filtration.

Protein A, followed by cation exchange and anion exchange, can be successfully used for the purification of BsAbs, fusion proteins, or Fabs containing the Fc region. For molecules that do not contain an Fc region, capture is typically achieved using cation exchangers or mixed-mode resins in bind/elute mode, depending on the molecular characteristics of the target protein. A subsequent polishing step to improve resolution generally follows the capture step. This polishing step could be ion exchange (IEX) or hydrophobic interaction chromatography (HIC), depending on the previous step. Sometimes a third chromatographic step is required, depending on the separation results from the previous steps. In addition to resins, membrane adsorbers are also used for the polishing steps, usually in flow-through mode.

Virus filtration is not needed for bacterial expression systems due to the absence of adventitious viruses. For proteins expressed in mammalian cell cultures, demonstrating viral clearance is a regulatory requirement. Some fusion proteins or BsAbs can be similar in size to virus filter membrane pore sizes (~20 nm), leading to significant process challenges in terms of filter capacities and flux rates. For these types of molecules, asymmetric PES parvovirus filters should be evaluated first, with and without prefiltration. If product recovery is an issue, regulatory agencies have accepted the use of non-parvovirus filters [2]. For smaller molecules, asymmetric PES parvovirus filters are recommended. Membrane-based prefilters can be used to normalize the feed (with respect to aggregate and impurity levels), increase capacity, and reduce operating costs for this process step [2]. Special care should be taken when outlining the virus validation step, as this will dictate achievable process loadings [2].

For the ultrafiltration/diafiltration (UF/DF) step, vendors typically recommend using filters with a molecular weight cut-off (MWCO) 3-5X tighter than the molecular weight of the target molecule. Therefore, for Fab molecules, which are small compared to MAbs (approximately 10 kD – 80 kD), tangential flow devices with 1-10 kD MWCOs are commonly used; as a result, the permeate fluxes can be lower. For BsAbs, typical UF/DF filters range from 30-50 kD MWCO.
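As a rough illustration of the vendors' "3-5X tighter" rule of thumb described above, the following sketch selects candidate MWCOs for a given target molecular weight. The function name, the list of commercial cut-offs, and the rounding behavior are illustrative assumptions, not vendor recommendations; actual device selection should be confirmed in small-scale trials.

```python
# Illustrative sketch of the "3-5X tighter" MWCO rule of thumb for UF/DF
# membrane selection. The cut-off list and fallback logic are assumptions,
# not vendor guidance.

COMMON_MWCO_KD = [1, 3, 5, 10, 30, 50, 100]  # typical commercial cut-offs

def candidate_mwco(target_mw_kd):
    """Return commercial MWCOs between 1/5 and 1/3 of the target MW (kD)."""
    lo, hi = target_mw_kd / 5.0, target_mw_kd / 3.0
    picks = [m for m in COMMON_MWCO_KD if lo <= m <= hi]
    # fall back to the largest cut-off below the upper bound
    return picks or [max(m for m in COMMON_MWCO_KD if m <= hi)]

# A ~48 kD Fab: 48/5 = 9.6 and 48/3 = 16, so a 10 kD device fits the window
print(candidate_mwco(48))    # [10]
# A ~150 kD antibody-like BsAb: 30-50 kD devices fall in the window
print(candidate_mwco(150))   # [30, 50]
```

Note how the results agree with the ranges quoted in the text: 1-10 kD devices for Fabs and 30-50 kD devices for antibody-sized molecules.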

Additionally, some molecules can be PEGylated to improve bioavailability, which leads to higher viscosities as concentration increases. Also, because of the interest in subcutaneous applications, target molecules are being concentrated to higher concentrations. For these reasons, the influence of the type of screen in flat sheet devices (screens help create turbulence and promote mass transfer) is paramount and should be taken into account in small-scale optimization studies [1]. A final consideration for E. coli-expressed Fab molecules is downstream endotoxin removal. This is often achieved through anionic membrane adsorbers or charged membrane filters [1].

For the final sterile filtration, asymmetric PES membranes offer higher fluxes and capacities than symmetric membranes; however, both PES and PVDF membranes should be evaluated at this stage. Attention should be paid to the final sterile filtration step as well, especially with respect to product recovery.


4.0 Part 3 – Scale Up and Process Transfer Considerations

Factors to consider when scaling up current antibody and antibody variant processes depend on the stage and goals of the project, including the molecule’s preclinical or clinical phase, speed to market, process economics, manufacturing and operational flexibility, expertise, facility infrastructure, and batch volumes. The pros and cons of these factors can be weighed to decide how to proceed with the scale-up logistics. In some cases, companies may lean towards utilizing single-use, stainless steel, or hybrid manufacturing processes. Additional considerations include building or using an existing facility, or outsourcing manufacturing to take advantage of the process development experience and infrastructure of contract manufacturing organizations (CMOs).

Single-use processes have inherent “out of the box, ready to use” benefits, easing implementation. With single-use processes, users benefit from a lower upfront capital investment compared to a fixed facility with stainless steel systems; specific infrastructure requirements, such as steam and CIP/SIP, are likely not needed; and validation is minimal or eliminated [4]. For a multi-product facility, and in cases where batch volumes may vary and be less than 2,000 L, the additional benefits of a single-use approach may come from quicker turnaround times from batch to batch, lower risk of cross-product contamination, flexible volume manufacturing, and overall economics and facility fit. All of these factors contribute to the delivery of a process with speed to market, improved economics, and process flexibility.

A stainless steel facility can be considered for late-stage molecules, multiple and large campaigns, and batch volumes greater than ~2,000 L, where single-use systems may be a limitation. In this case, the facility and equipment implementation require a larger capital cost and initial validation investment; however, long-term utilization of these assets can bring a return on investment and pay for itself over time, equipment depreciation can be incorporated, and other factors of a long-term, multiuse facility may make the economics more feasible [4]. A manufacturing facility may also incorporate a hybrid of single-use and stainless steel infrastructure to accommodate all needs of the project’s stage and goals. CMOs are well equipped with both types of facilities and process expertise, which may be more appealing in cases where there is a facility throughput limitation and/or speed to market requires outsourcing.

In addition to specific facility needs, each unit operation has its own specific rules for scale-up. In some cases, linear scalability can be accomplished for some technologies such as filtration; however, system scale-up is sometimes overlooked and can be the cause for deviation or unexpected process performance [5]. Fluid dynamics, hold-up volumes, frictional losses, hardware requirements and yield recoveries are factors to strongly investigate prior to scaling up. Ultimately, thorough process transfer studies must be completed to ensure the process meets specifications.

It is important to consider the hold-up volumes not only of the devices utilized in the process, but also of the system itself, and the impact this has on overall process recoveries. In some cases systems are installed in cramped spaces, which may require device selection and physical attributes of the tubing/piping to include turns and differences in height. Along with hold-up volumes and system/piping design, all of which contribute to frictional losses, fluid dynamics (viscosity, temperature, flow rate) is another factor that helps determine the system component requirements for each unit operation [5]. In addition, when scaling up, researching the type of hardware to be used at large scale is clearly necessary but at times not given enough attention. For example, many devices on the market are fully encapsulated at small scale but require holders at equivalent larger scales. Considerations of the large-scale hardware systems must be addressed within the different unit operations, including physical attributes, automation, and footprint. Finally, and importantly, proper validation of the systems and process should be completed prior to scaling up or manufacturing clinical material.

In addition to specific facility and system needs, all of the unit operations share common considerations for implementation and technology transfer. One of these is for companies to further investigate each molecule’s process operating conditions via Design of Experiments (DoE), or even a deeper dive into a Quality by Design (QbD) approach. Another is to understand raw material and consumable lot-to-lot variability, as well as the process’s batch-to-batch variability. These factors can provide a better understanding of each unit operation and of the performance of the process as a whole, which can contribute to robustness and possibly a wider operating window. In cases where these deeper approaches may not be feasible, an upfront investment in rationally defined safety factors can be incorporated for all unit operations to minimize the risk of process deviations [6].
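One simple way to operationalize the "rationally defined safety factor" idea above is to size a unit operation from small-scale capacity data and apply a multiplier. The sketch below is purely hypothetical: the function name, the 1.5x default factor, and the example numbers are invented for illustration, not taken from any guideline.

```python
# Hypothetical sketch: sizing a clarification filter from small-scale
# capacity data with a safety factor. All numbers are illustrative only.

def required_filter_area(batch_volume_l, capacity_l_per_m2, safety_factor=1.5):
    """Scale up installed filter area from measured small-scale capacity."""
    if capacity_l_per_m2 <= 0:
        raise ValueError("capacity must be positive")
    return safety_factor * batch_volume_l / capacity_l_per_m2

# 2,000 L batch, measured small-scale capacity of 100 L/m2, 1.5x factor:
area = required_filter_area(2000, 100)
print(f"{area:.1f} m2 installed area")  # prints "30.0 m2 installed area"
```

The safety factor absorbs lot-to-lot and batch-to-batch variability when a full DoE characterization is not feasible; its magnitude should be set from whatever variability data are available.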


5.0 Conclusions
Antibody variants such as Fabs, BsAbs, and fusion proteins are generating increased interest as the demand for targeted therapeutics with improved efficacy continues to grow. Compared to traditional MAb processes, these molecules present some development and manufacturing challenges. Each of the steps in the recovery and purification of these molecules must be optimized based on the process requirements and the molecule’s characteristics, ensuring robust, stable, and scalable production processes.


January 9, 2015 | Claire Scanlan | Mireille Deschamps | Juan Castano | Ruta Waghmare, PhD | Corresponding Author Ruta Waghmare, PhD | ruta.waghmare@emdmillipore.com | doi: 10.14229/jadc.2015.1.9.001

Received: December 10, 2014 | Accepted January 7, 2015 | Published online January 9, 2015

Creative Commons License
This work is published by InPress Media Group, LLC (Downstream Processing Considerations for Antibody Variant Therapeutics by Claire Scanlan, Mireille Deschamps, Juan Castano, Ruta Waghmare, PhD) is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at adcreview.com/about-us/permission.

Last Editorial Review: January 9, 2015




Editorial: Utilization of Breakthrough Therapy Designations for Market Access


In the pharmaceutical marketplace, time-to-market is crucial, and companies seek viable strategies to help hasten the review process in the United States. Manufacturers must meet the requirements of the Food and Drug Administration Safety and Innovation Act (FDASIA), signed into law on July 9, 2012, which expanded the FDA’s authorities and strengthened the agency’s ability to safeguard and advance public health. The Advancing Breakthrough Therapies for Patients Act was incorporated into a title of FDASIA in order to expedite the review of novel therapies that show very promising results in early-phase clinical trials. [1]


According to a peer-reviewed article published in Clinical Cancer Research, a breakthrough therapy designation (BTD) must meet specific criteria. First, the disease in question must be life-threatening and highly debilitating. Second, the disease must lack a standard-of-care treatment, or the existing standard of care must have failed to show efficacious clinical improvement. In addition, the product in clinical development must demonstrate substantial improvement compared to a therapy in the same class, or be superior to the current standard of care. Finally, the product must show promising or superior outcomes in early-stage trials.[2] If the product meets these criteria, it may be granted the BTD.


Fig 1.0 – Overview of Granted Breakthrough Therapy Requests.

1.0 Impact on Oncologic Market Access
Breakthrough Therapy Designations (BTDs) have captured the attention of many hematology and oncology drug manufacturers. According to EP Vantage, 141 BTDs have been requested, and 37 applications have been accepted by the FDA since 2012.[3] The Center for Drug Evaluation and Research (CDER) reported that, as of December 2013, 42% of all granted BTDs fall into the disease categories of hematology and oncology.[4] Clearly, pharmaceutical companies specializing in these disease areas are taking notice of this accelerated market access program.

While 37 applications have been accepted, only four products have received NDA approval. Of these four products, three are indicated for oncology use. Obinutuzumab (Gazyva®; Genentech) and ibrutinib (Imbruvica®; Pharmacyclics/Janssen Biotech) are both indicated for chronic lymphocytic leukemia (CLL); ibrutinib is also indicated for mantle cell lymphoma. For the oncology market, BTDs represent the best option for expedited market access. On average, a product takes approximately 88 months to gain market approval; with a BTD, the approval process can take as little as 53 months.[5] For example, if a BTD is granted to a drug in Phase II, the product has the potential to reach the market three years faster than a product without a BTD. This kind of accelerated market access leads to a greater length of market exclusivity and gives manufacturers increased time for sales and marketing.


Fig 2.0 – For serious and life-threatening diseases, including cancer, the U.S. Food and Drug Administration (FDA) can grant specific designations to trial drugs that may help accelerate their time to approval. If the FDA grants accelerated approval, patients may receive a trial drug while ongoing Phase III studies confirm safety and efficacy. Source: FDA.

2.0 Pricing and Market Access Considerations for Oncology Products
The accelerated review of products with a BTD requires additional planning for pricing, reimbursement and market access (PR&MA). On most occasions, PR&MA planning is done during Phase III of product trials; with a BTD, a company must be ready to start this process during Phase II. According to Ram Subramanian, a pricing, strategy and marketing expert at Simon-Kucher & Partners, “Payers will find it more difficult to restrict access to a drug whose therapeutic value has been singled out by the FDA, and which may have already generated excitement among physicians and patients.”[5]

In other words, an oncology product gaining a BTD can potentially ease payer scrutiny. Nevertheless, the company must still construct a compelling value proposition to present to these stakeholders.

An article by Simon-Kucher & Partners published in OBR Oncology identified three value drivers that help develop a meaningful value proposition: Overall Survival (OS), Progression-Free Survival (PFS), and Safety Profile (SP) are the most compelling components of clinical results that payers want to see.[5] For oncology products, OS and PFS data demonstrate efficacy to payers. Without these data, payers may be skeptical of the product and will require alternative, compelling evidence. A company may need to consider this challenge and find ways to compensate and demonstrate value despite the limited data generated by a product undergoing an accelerated review process.


3.0 Payer Uncertainty with Breakthrough Therapy
A certain amount of skepticism and uncertainty exists surrounding how BTD products undergoing clinical trials will be priced upon market approval. At a conference sponsored by Friends of Cancer Research in September 2013, an Aetna representative stated that payers are “nervous” about these products.[6] Michael Kolodziej, national medical director of Oncology Solutions, stated, “We recognize unmet need. We recognize the therapies that are being thrown at these diseases are not very effective. We would like much more effective treatment. We are not fools, however, and there are no new drugs to come to market that are cheaper than old drugs… So, we have to find the way to get the right drug to the right patient.”[6]

In addition, reimbursement surrounding products with BTD is uncertain. Primarily, payers are concerned about the prices that these new products will bring to the market.


4.0 Conclusion
For manufacturers, BTDs could bring potentially significant benefits to their oncology pipelines. With market approval, a product’s market access will significantly increase, yet gaining support from the various market access stakeholders can still pose significant challenges. Payers in the oncology market, as well as in all other therapeutic areas, are generally concerned about the prices that these novel therapies will command.


March 31, 2015 | Corresponding Author Sophie Murdoch | doi: 10.14229/jadc.2015.3.31.001

Received: March 30, 2015  | Published online March 31, 2015 | This submitted editorial has not been peer reviewed.

Creative Commons License
This work is published by InPress Media Group, LLC (Editorial: Utilization of Breakthrough Therapy Designations for Market Access) and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at adcreview.com/about-us/permission.

Last Editorial Review: March 31, 2015




Editorial: New Learning Makes ADC Development Less Empirical


A few decades ago, the pharmaceutical industry developed and commercialized racemic small molecule drugs. Despite the success achieved with racemic drugs, the risk associated with the use of an enantiomer that does not contribute to the therapeutic effect (but may contribute to side-effects, as in the thalidomide tragedy), combined with scientific and technical advances allowing the manufacture of enantiomerically pure drugs, has made this approach obsolete.[1]

Currently, a racemic small molecule drug is very unlikely to get FDA approval regardless of its therapeutic properties. It is equally unlikely that a pharmaceutical company would even try to develop such a therapeutic.[2]

Will the ADC community experience the same revolution?

Heterogeneous ADCs
The two commercial antibody-drug conjugates, brentuximab vedotin (Adcetris®; Seattle Genetics) and ado-trastuzumab emtansine (Kadcyla®; Genentech/Roche), as well as the majority of clinical-stage ADCs, are heterogeneous mixtures of products that differ in the sites and stoichiometry of conjugation. Although they can offer improved treatments to some patient populations, heterogeneous ADCs suffer from a few drawbacks:

  • The heterogeneity complicates manufacturing: batch-to-batch reproducibility is more difficult to achieve, even if it can be mastered given the right equipment and know-how;
  • More complex analytical testing is required;
  • The individual compounds present in the ADC mixture do not have the same physico-chemical (e.g. aggregation) and pharmaceutical (pharmacokinetic profile and toxicity) properties, resulting in a suboptimal therapeutic treatment;
  • Higher DAR species present in the mixture are associated with faster clearance and increased toxicity.

Site-directed conjugation
The emergence of site-directed conjugation techniques gives access to ADCs with improved homogeneity. [3] Site-directed conjugation will not only allow manufacturing of more homogeneous ADCs with optimal drug loading, it will also (or maybe first) facilitate drug development. Controlling the drug loading and conjugation site allows a comparison between homogeneous candidates, providing new learning and making ADC development less empirical.

The author believes that site-directed conjugation will ultimately result in improved standards of care and become the new gold standard for bioconjugation. This provides opportunities for patients as well as for the pharmaceutical industry.


April 10, 2015 | Corresponding Author Laurent Ducry, Ph.D | doi: 10.14229/jadc.2015.4.10.001

Received: March 29, 2015  | Published online April 10, 2015 | This submitted editorial has not been peer reviewed.

Disclosures: Laurent Ducry, Ph.D is an employee of Lonza, one of the founding partners of ADC Review / Journal of Antibody-drug Conjugates.  He is also a member of the journal’s Editorial Advisory Board.

Creative Commons License
This work is published by InPress Media Group, LLC (Editorial: New Learning Makes ADC Development Less Empirical) and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at adcreview.com/about-us/permission.

Last Editorial Review: April 10, 2015




Big Data in Science: Which Business Model is Suitable?


1.0 Abstract
The first aim of this paper is to define which Big Data business model (in science-based activity) will be able to provide IT services to biotechnology and life sciences companies, as well as research laboratories. The second aim of the paper is to define a methodology to market a still widely unknown service for these companies and laboratories.

Since 2011, Big Data has been identified as an emerging market, given the availability of huge amounts of commercial and marketing data. The life sciences are also known to generate a deluge of data, a largely untapped source of information. Our main approach was to identify what is specific to the life sciences and what managers in life sciences companies and laboratories may expect from a company based on Big Data activities. The life sciences routinely deal with large amounts of data, most of it in well-known structured formats. The question was what additional actionable information could be provided by Big Data technologies and analysis. We also evaluated the needs and expectations of biotechnology and life sciences companies and research laboratories regarding data search and analysis, using an online survey addressed to contacts at life sciences companies and laboratories. Most responders require anonymised and secure data analysis and expect actionable information to launch a new biotech product or to confirm a strategy.


 

2.0 Introduction
Big Data is a concept that has been generating buzz since 2011, although its origin remains uncertain. Diebold (2012) traced the term to a lunch-table discussion at Silicon Graphics Inc. (SGI) in the mid-1990s; John Mashey, SGI’s renowned chief computer scientist, is thought to have been the first to spread the term Big Data, during a conference in 1998. However, the present hype around Big Data can partly be attributed to IBM, which popularized the concept and invested in this new analytics market.

Intuitively, size is the first characteristic that defines Big Data. However, new features have recently emerged to define the concept more precisely. Laney (2001) proposed a common three-dimensional framework for Big Data: Volume, Variety and Velocity, known as the Three V’s. Gartner, Inc., in its IT Glossary, and the TechAmerica Foundation use similar definitions (Gandomi and Haider, 2015).

  • Volume refers to the scale of data, which depends on the industry. Big Data sizes are reported in terabytes (TB, 10^12 bytes), petabytes (PB, 10^15) and even zettabytes (ZB, 10^21). IDC projects that the digital universe will reach 40 zettabytes by 2020 (EMC study, 2014). For example, Facebook processes more than 500 TB daily, including 300 million uploaded photos and 2.7 billion “likes” (Jay Parikh, VP Facebook infrastructure engineering, 2012).
  • Variety refers to the structural heterogeneity of the data. Structured data are typically tabular data found in spreadsheets and relational databases. Unstructured data are text, images, audio and video. Semi-structured data can be illustrated by XML (eXtensible Markup Language) documents, used to exchange documents on the web, in the publishing industry and even in Microsoft Word documents.
  • Velocity refers to the rate at which data are generated and the speed at which they should be analyzed and acted upon. For example, Wal-Mart processes more than one million transactions per hour (Cukier, 2010), and on Facebook 70,000 queries are executed and 105 TB of data are scanned via Hive per day. Moreover, digital devices such as smartphones and sensors generate high-frequency data (geo-localization, demographics, buying patterns, physiological data).
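To make the Volume figures above concrete, a short sketch can convert between the decimal byte units just defined. The daily ingest rate and the 40 ZB figure are the ones quoted above; the annualization arithmetic is illustrative only.

```python
# Illustrative unit arithmetic for the Volume dimension. Decimal (SI)
# prefixes are used, matching the 10^12 / 10^15 / 10^21 definitions above.

TB = 10**12
PB = 10**15
ZB = 10**21

daily_ingest_tb = 500           # Facebook figure quoted in the text
per_year_pb = daily_ingest_tb * TB * 365 / PB
print(f"{per_year_pb:.1f} PB per year")           # 182.5 PB per year

digital_universe_zb = 40        # IDC projection cited in the text
print(f"{digital_universe_zb * ZB / TB:.0e} TB")  # 4e+10 TB
```

The jump from a single company's yearly petabytes to the projected tens of billions of terabytes for the digital universe illustrates why Volume alone already forces dedicated tooling.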

Main IT companies have proposed three more dimensions:

  • Veracity (the fourth V, pushed by IBM) refers to the unreliability inherent in some sources of data. For instance, customer opinions or feelings in social media are by nature uncertain.
  • Variability and complexity were suggested by SAS Inc. Variability refers to variation in data flow rates: periodic peaks and troughs due to server access or the number of requests to a source can alter Big Data velocity. Complexity arises from the huge number of Big Data sources, whose data must be connected, matched, cleaned and transformed.
  • Value is a dimension proposed by Oracle to distinguish low-value raw data from high-value (i.e. analyzed) data. In a Datameer Inc. white paper (2013), Groschupf et al. reported that the main objectives companies pursue in implementing Big Data are to increase revenue, decrease cost and increase productivity, all consistent with creating value out of data. What distinguishes Big Data from classical reporting and monitoring models is its predictive added value (Brasseur, 2013).

Table 1. Some of the open source technologies used in Big Data projects.
Big Data in itself is worthless; it requires analysis to extract intelligence from the data and support decision-making. The Big Data pipeline for extracting insights can be divided into two sub-processes: data management and data analytics (Gandomi and Haider, 2015). According to Wang and Liu (2009), this process can be defined as a Business Intelligence (BI) system. A Business Intelligence system should have the following basic features (Marín-Ortega et al., 2014):

  • Data Management: including data extraction, data cleaning, data integration, efficient storage and maintenance of large amounts of data
  • Data Analysis: including information queries, report generation, and data representation functions
  • Knowledge discovery: extracting useful information (knowledge or insights) from rapidly growing volumes of digital data in databases.
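The three features above can be sketched end-to-end in a few lines of code; the gene-expression records and the 10.0 threshold below are invented for illustration, not taken from the article:

```python
# A minimal sketch of a BI pipeline: data management -> data analysis ->
# knowledge discovery, using Python's standard library only.
from statistics import mean

# 1. Data management: extract, clean and integrate raw records.
raw = [{"gene": "BRCA1", "expr": "12.5"}, {"gene": "TP53", "expr": "n/a"},
       {"gene": "BRCA1", "expr": "11.1"}]
clean = [{"gene": r["gene"], "expr": float(r["expr"])}
         for r in raw if r["expr"] != "n/a"]          # cleaning step

# 2. Data analysis: query and aggregate (a report-style summary).
by_gene = {}
for r in clean:
    by_gene.setdefault(r["gene"], []).append(r["expr"])
report = {g: mean(values) for g, values in by_gene.items()}

# 3. Knowledge discovery: extract an insight, here via a simple rule.
highly_expressed = [g for g, m in report.items() if m > 10.0]

print(report, highly_expressed)
```

The point of the sketch is the ordering: analysis and discovery are only as good as the management step that precedes them, which is why data governance (discussed in section 4.1) matters.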

Most authors represent the Big Data analysis pipeline from a computer-driven process perspective. We suggest highlighting the role of data visualization, both as an output of data management and as a tool for data analysis (Fig 1.). The objective of data visualization is to present information clearly and efficiently to viewers using well-designed graphics. It is a young field, presented both as one of the steps of data science (see below) (Friedman, 2008) and as a tool for communicating information from complex data sets. Fernanda B. Viégas, a Brazilian computer scientist trained at the MIT Media Lab, founded “Big Picture”, Google's data visualization research group, together with Martin Wattenberg. They suggest that an “ideal visualization should not only communicate clearly, but stimulate viewer engagement and attention” (Viégas and Wattenberg, 2011).

Fig 1. Big Data analysis pipeline showing the role of Data visualization and human collaboration (from Gandomi and Haider, 2015).
The role of human collaboration in refining Big Data processes is also essential, because each step requires a human decision (Jagadish et al., 2014). One challenge of Big Data is structuring data so that it can be reused, and human intervention is needed to do so. Data analytics techniques support the processes handled by data scientists (Table 2).

3.0 What about Big Data in science?
There can be some confusion among the terms Big Data Science and Big Data in science, specifically in biology or the Life Sciences. Big Data starts from data characteristics (top-down), whereas Big Data Science starts from data use (bottom-up) (Jagadish, 2015). The National Consortium for Data Science (NCDS), an industry-academic partnership founded in Chapel Hill in 2013, defines data science as “the systematic study of digital data using scientific techniques of observation, theory and development, systematic analysis, hypothesis testing and rigorous validation”. Data scientists are people specialized in data analytics; for Big Data in biology or health, the data scientists are bioinformatics scientists.

Table 2. Data analytics techniques subset relevant for Big Data (from Gandomi and Haider, 2015).
Big Data in Biology, in the Life Sciences or in Health refers to Big Data computing in a specific field or industry, here the Life Sciences (if we consider health a life science). In the past, biologists used the term bioinformatics to describe the methods and techniques for managing the data generated by research in biology and medicine. The term bioinformatics, although mainly dedicated to genomic data, has fallen into relative disuse since Big Data emerged as a buzzword in 2011 (Fig 2.). This is why we use the terms Big Data in Biology or in the Life Sciences in this paper to discuss specific applications and uses of Big Data in this field.

The data generated by the Life Sciences differ from the data available in other fields and industries. Most biological data are generated by academics under the English neologism omics, which refers to fields of biology ending in -omics, such as genomics, proteomics or metabolomics. An overview of the biological data produced and their availability is presented in Table 3. Data formats in the Life Sciences (e.g. the PDB format for proteins, DICOM for medical imaging) differ from those in other fields and require specific software tools to manage them (e.g. the FASTA or BLAST software packages for comparing nucleic acid or protein sequences). Big Data management in the Life Sciences will have to combine classical data management (Table 1) and the data analysis techniques listed in Table 2 with management of these specific formats. For example, biologists use link prediction techniques to reveal links or associations in biological networks (e.g. cellular signal transduction pathways) in order to reduce the cost of additional expensive experiments (Navlakha et al., 2012).
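As a hedged illustration of the link-prediction idea cited above (Navlakha et al., 2012), a simple common-neighbors score can rank unobserved links in an interaction network; the toy nodes and edges below are invented and stand in for proteins and their known interactions:

```python
# Common-neighbors link prediction on a toy interaction network:
# candidate links between nodes sharing many partners are ranked first,
# suggesting which experiments may be worth running before the others.
from itertools import combinations

edges = {("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D")}
nodes = {n for e in edges for n in e}
neighbors = {n: {m for e in edges for m in e if n in e and m != n}
             for n in nodes}

def common_neighbors(u, v):
    """Score a candidate link by the number of shared interaction partners."""
    return len(neighbors[u] & neighbors[v])

# Rank node pairs that are not yet linked, highest score first.
candidates = [(u, v) for u, v in combinations(sorted(nodes), 2)
              if (u, v) not in edges and (v, u) not in edges]
ranked = sorted(candidates, key=lambda pair: -common_neighbors(*pair))
print(ranked[0], common_neighbors(*ranked[0]))
```

On this toy graph the only unlinked pair, A–D, shares two partners (B and C), so it would be the first association to test experimentally.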

Fig 2. Comparison of Big Data and Bioinformatics term requests using Google Trends since 2005.
Interestingly, a search on Google Trends shows that the terms “Big Data in Life Sciences” and “Big Data in Biology” generate insufficient search volume to display a result. When “Big Data in Life Sciences” is typed into Google Search, most of the links listed relate to Big Data in health. IBM, a Big Data pioneer, has for instance opened a well-documented Big Data & Analytics Hub. The reason is that the Big Data revolution has been expected to accelerate value and innovation and therefore to reduce costs (Groves et al., 2013): this McKinsey & Company white paper argued that US healthcare costs could be reduced by $300 billion to $450 billion.

4.0 Specific issues of Big Data

4.1 Data governance
In a 2014 survey, PwC, a consultancy, found that 62% of Life Sciences executives had changed the way they approach big decision-making as a result of big data or analytics. According to the same survey, 81% of companies had not defined any strategic data governance, and 54% of respondents thought that top management was not concerned with the quality of data.

Table 3. Types of data used in Life Sciences relevant for Big Data in Biology and Health.
In the Life Sciences, most data have so far been structured and available in dedicated databanks such as GenBank (genomics) or ExPASy (proteomics), where records are stored in specific, operable formats. Nevertheless, a large amount of data remains unstructured or unindexed, and is therefore unavailable for direct use. In many countries there are initiatives to index Life Sciences data and to consider the data life cycle (e.g. RepliBio, a collaborative Brittany project run on the Biogenouest platforms).

4.2 Personal data protection
The main interest of Big Data is the ability to collect any kind of data, most of it unstructured, from the deeper, hidden Internet. These collected data include a great deal of personal data (age, sex, hobbies, jobs, …), which the big Internet players (e.g. Google, Facebook) try to capture, use for marketing and monetize. The confidentiality policies and general conditions of use published by these major players are accepted by a large majority of social network users, who turn a blind eye to the fact that their personal data can be used, or sold, to any interested operator. In Europe, users must be informed about the potential use of their data, and the “right to be forgotten” on the Internet was finally, reluctantly, accepted by Google Inc. in 2014 under pressure from the French legal system (court order of December 19, 2014) and the European Court of Justice (judgment of May 13, 2014). In 2012, the European Commission committed to a major reform of the EU legal framework on the protection of personal data. In the US, there is no single, comprehensive federal law regulating the collection and use of personal data.

Personal health data are a major concern. Medical and patient information is protected in Europe, even though it is useful for understanding disease and physiological processes. For instance, in Framework Programme projects financed by the European Commission (e.g. FP7 or H2020), patient data are anonymized before use in research work packages.

4.3 Data ownership
In most countries, data ownership has no specific legal status: the data producer is not, strictly speaking, the owner of the data. A datum is a piece of information, and information is free (Frochot, 2011). Only an intellectual creation based on the data (i.e. copyright) or the data collection method (structured databases) can be protected.

Table 4. Examples of Big Data applications in Life Sciences.
5.0 How can we market Big Data services?
Since Big Data as a service has only been known since 2011, we can reasonably assume that the field suffers from a lack of marketing, image and knowledge.

In order to identify and quantify the needs of managers in Life Sciences companies with respect to IT and Big Data services, we launched a survey among 2,565 contacts provided by CBB CapBiotek, the Brittany biotechnology organization. Data analysis (75%) and data visualization (50%) are considered the most important services, chosen in order to launch new products (50%) and proofs-of-concept (75%). Even though most responders want secure data procedures and processing (78%), most have no idea what data monetization is. Interestingly, most of them (62%) are aware that Big Data can provide new information from a mix of sources.

In sum, managers are ready to purchase Big Data services if data collection is anonymized (in health) and if data analysis and processing are secure. This last point could be a key success factor for a company whose activity is based on Big Data in science.

6.0 What will be the business model and added value of a Big Data services-based activity?
According to Wang (2012), there are three main business model approaches for Big Data. The first focuses on using data to create differentiated offerings (information-based differentiation, e.g. the Google AdSense advertising system); the second uses brokering to augment the value of information (information-based brokering, e.g. Bloomberg delivering additional analytical insights); and the third involves content and information providers and brokers who create delivery networks enabling the monetization of data (information-based delivery networks). Data monetization is the fourth of the five steps in the “Big Data Business Model Maturity” chart proposed by Schmarzo (2012): organizations try to sell their data with analytics to other organizations, create “intelligent” products, or transform their customer relationships by leveraging actionable insights (Schmarzo, 2013).

In the Life Sciences, scientists and researchers, as well as biotech and pharma managers, are quite aware of the amount of data generated by biology, but not of the tools available to handle and manage those data. They are accustomed to using and handling omics data, but have little idea of what kind of relevant information they can expect from Big Data services.

First of all, a Big Data-based activity should demonstrate potential applications (Table 4) and benefits for its customer segments. Biotech and pharma managers expect actionable information that could help them decide, for instance, whether to continue developing a therapeutic molecule through long and expensive clinical trials. In the health domain, big pharma managers and most governments are probably overestimating the benefits of Big Data in reducing public health costs. Big Data services will have to provide specific, relevant, actionable and confidential information for Life Sciences managers (Fig 3.).

Fig 3. Proposed business model for a Big Data in Life Sciences services activity from a life sciences organization perspective.
Sequencing the human genome took about 10 years (Dulbecco, 1986; declared complete in April 2003); nowadays it takes about one week using NGS (Next Generation Sequencing), a 10,000-fold improvement in sequencing costs where Moore's Law would have predicted only a 100-fold one (Delort, 2012). However, it will take decades to extract all the valuable and actionable information from the genome (e.g. links between genes and diseases, or between non-coding sequences and diseases).

7.0 Conclusion
According to the Gartner hype cycle for 2014 (Rivera, 2014), among emerging IT technologies Big Data is already on the downslope of the peak of inflated expectations and will soon reach the trough of disillusionment. Surprisingly, data science is on the upslope of that peak. This suggests, first, that Big Data has so far been mainly a buzzword with little substantive content, and second, that the business model of a Big Data-based activity will require human expertise to produce business intelligence.

Fig 4. Gartner hype cycle for Data Science and Big Data, inspired from Rivera (2014).
However, the compound annual growth rate for Big Data technology and services is expected to be about 26.24%, with the market reaching $41.52 billion by 2018 (IDC, 2014).

Moreover, Forbes reported a $125 billion Big Data analytics market in 2015 (Press, 2014). However, no market-share data are available for Big Data in the Life Sciences. Biotech and pharma managers, as well as research laboratory directors, are looking for information more actionable than what they are used to getting from omics data studies. Although new computing technologies (e.g. parallel computing) and cloud computing will allow additional data acquisition and processing, data scientists with dual competencies will be needed to refine data information processes and results. Secure, relevant, appropriate and valued information generated by Big Data technologies and services will be the guarantee of a sustainable business model.

The area where expectations of Big Data are greatest is health, because of the public health issues and challenges facing most countries. However, Big Data in health is constrained by the lack of data management and governance in pharma companies and by personal data protection issues.

Nevertheless, given the large amount of data available in the Life Sciences, both structured and unstructured, there is no doubt that an activity combining Big Data technologies with data science expertise (bioinformaticians) is, first, sustainable and, second, able to produce valuable information that helps biotechs and pharmas improve or accelerate their development.

In the end, Big Data in the Life Sciences will benefit from the field's long practice with structured data, but will develop only if Big Data services deliver on their promise of becoming the new discovery tools for researchers.


Acknowledgement 
We would like to acknowledge Gilbert BLANCHARD, Director of CBB Capbiotek for his help and support during the study. Special thanks to Lilybelle MALAVÉ for her support.



“Where is the wisdom we have lost in knowledge?

Where is the knowledge we have lost in information?”

T.S. Eliot


September 10, 2015 | Corresponding Author Guy Mordret, Ph.D | doi: 10.14229/jadc.2015.10.10.001

Received: July 29, 2015  | Published online September 10, 2015

Disclosures: Guy Mordret, Ph.D is an employee of Anaximandre Ltd., Parc d’Innovation de Mescoat, 29800 Landerneau, France.

Creative Commons License
This work is published by InPress Media Group, LLC (Editorial: New Learning Makes ADC Development Less Empirical) and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at adcreview.com/about-us/permission.

Last Editorial Review: September 10, 2015



The post Big Data in Science: Which Business Model is Suitable? appeared first on ADC Review.


The Challenge of cGMP in the Manufacturing of Antibody-drug Conjugates


1.0 Introduction
Antibody-drug Conjugates (ADCs) are a new generation of highly hazardous, highly toxic pharmaceutical products used, among other things, in the targeted treatment of cancer. Most ADCs require an Occupational Exposure Limit (OEL) below 50 nanograms/m3.

The manufacture of ADCs presents a new challenge, particularly in aseptic production. In order to meet Current Good Manufacturing Practice (cGMP) requirements, the aseptic process must be run at positive pressure, yet the operating personnel must also be actively protected from the substance. Isolators have been used successfully for many years for product protection in aseptic manufacture, and they are now being called on to provide active personal protection as well. Is this a contradiction in terms? At first glance, yes. In aseptic manufacture, isolators are operated at positive pressure in order to protect the product. Containment, however, calls for negative pressure in the isolator to protect personnel by preventing the hazardous substance from escaping. Special seals on the isolator and a sophisticated pressure-cascade concept with active mouseholes make it possible to protect both the product and personnel.

But where do these Occupational Exposure Limits (OELs) come from?

2.0 How are OELs calculated?
Occupational Exposure Limits or OELs are calculated on the basis of Acceptable Daily Exposure / Permitted Daily Exposure or ADE/PDE. ADE is used specifically in the USA, while PDE is a European term from the EMA (European Medicines Agency) that has been mandatory in the manufacture of pharmaceutical products since 2015, as outlined in the EU GMP Guideline, Chapter 5, Article “Guideline on setting health based exposure limits for use in risk identification in the manufacture of different medicinal products in shared facilities“.

One of the sections of this guideline is entitled ‘Data requirements for hazard identification.’

Worker ADE/PDE (mg/day) = NOEL (mg/day) / (UFc × BA)

NOEL = No Observed Effect Level
UFc = Cumulative Uncertainty Factor for the worker
BA = Bioavailability by inhalation
ADE = Acceptable Daily Exposure in mg/day
PDE = Permitted Daily Exposure in mg/day
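As an illustration only, the formula above can be chained with the common industrial-hygiene convention that an airborne OEL is obtained by dividing the daily allowable dose by the volume of air inhaled during a work shift (typically assumed to be 10 m3 over 8 hours). Every numeric input below is hypothetical, and the 10 m3 breathing volume is an assumption of this sketch, not a value from the article:

```python
# Hedged sketch: worker ADE/PDE from the guideline formula, then an
# airborne OEL via an assumed per-shift breathing volume of 10 m3.

def worker_ade_mg_per_day(noel_mg_per_day, uf_cumulative, bioavailability):
    """ADE/PDE = NOEL / (UFc x BA), as given in the formula above."""
    return noel_mg_per_day / (uf_cumulative * bioavailability)

def oel_ng_per_m3(ade_mg_per_day, shift_air_volume_m3=10.0):
    """Convert a daily allowable dose into an airborne concentration limit."""
    return ade_mg_per_day / shift_air_volume_m3 * 1e6  # mg -> ng

# Hypothetical highly potent payload: NOEL 0.005 mg/day, UFc 100, BA 1.0.
ade = worker_ade_mg_per_day(0.005, 100, 1.0)   # about 5e-05 mg/day
print(round(oel_ng_per_m3(ade), 1))            # airborne limit in ng/m3
```

With these invented inputs the sketch yields an OEL in the low nanograms/m3 range, the order of magnitude the article discusses for ADC payloads.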

Figure 1.0: The Containment Pyramid, developed by Richard Denk and used as a global standard.

The relationship between the ADE/PDE and the OEL can also be seen in the Containment Pyramid (Figure 1.0). Alongside the OEL, the ADE/PDE is also used to calculate threshold values for cleaning residues and for product carry-over from one substance to the next. The threshold values calculated using PDE also replace the 10 ppm and the 1/1000 of the daily dose criteria previously applied in the European Union.
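As a hedged sketch of such a carry-over calculation, the health-based maximum allowable carry-over (MACO) commonly derived from the PDE takes the PDE of the previous product, the batch size of the next product and the maximum daily dose of the next product; all numeric inputs below are invented for illustration:

```python
# Illustrative health-based carry-over (MACO) limit derived from the PDE,
# in the spirit of the EMA approach mentioned above; inputs are hypothetical.

def maco_mg(pde_prev_mg_per_day, batch_size_next_mg, max_daily_dose_next_mg):
    """Maximum allowable carry-over of product A into a batch of product B."""
    return pde_prev_mg_per_day * batch_size_next_mg / max_daily_dose_next_mg

# Hypothetical: PDE of previous product 0.01 mg/day, next batch 50 kg
# (50e6 mg), maximum daily dose of the next product 500 mg.
print(maco_mg(0.01, 50e6, 500))  # carry-over limit in mg per batch
```

Because the limit scales directly with the PDE, a highly potent previous product (very low PDE) drives the cleaning acceptance limit down accordingly, which is the point of replacing the fixed 10 ppm and 1/1000-dose criteria.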

3.0 Adequate containment for complying with the required threshold value of 50 nanograms/m3
What do we mean when we refer to a threshold value of 50 nanograms/m3? Consider a particle with a size of 0.5 μm, the particle size relevant in the classification of clean rooms. Cleanroom class A (ISO class 5) allows 3,520 particles (>= 0.5 μm) per m3. The weight of a 0.5 μm particle, at a product bulk density of 0.8 kg/l, is approximately 100 nanograms. At the required OEL of 50 nanograms/m3, the relevant particle size is therefore 250 nanometers (0.25 μm). A series of important measures must be taken to achieve control over such a 250-nanometer particle, with regard both to cleaning and to complying with the OEL.

4.0 Measures for achieving a threshold value of 50 nanograms/m3
The critical stages in the manufacturing process of ADCs are producing the toxin ‘payload’ – a highly active substance that requires a defined OEL – and its linker, and adding the payload to the conjugate. Following sterile filtration, filling the product into vials and freeze-drying are further critical stages.

Which technologies exist for meeting the requirements of extremely low OELs?
In the last decade, a whole host of technologies have been developed with a view to transferring highly active or highly hazardous products safely into or out of a manufacturing process. When it comes to safely remaining below an OEL of 50 nanograms/m3, however, isolator technology is used in most cases. Originally developed for the nuclear industry, isolators have also been in use for many years in pharmaceutical manufacturing for highly active or highly hazardous substances. Isolator technology involves the use of a contained space, to which access is obtained by means of gloves attached to a glass door (Photo 4.0: ADC aseptic fill and finish). The interior of the isolator is operated in negative or positive pressure, depending on the requirement. For straightforward personal protection, the isolator is operated in negative pressure in order to prevent the hazardous substance from escaping. In aseptic manufacture, product protection has top priority and therefore requires the isolator to be operated in positive pressure.

Prior to the sterile filtration of the ADC, personal protection has priority, which is why straightforward personal protection isolators operated in negative pressure are used up until this stage. Isolators also vary in ways that can make them suitable for substances below 50 nanograms/m3.

The following requirements apply to an OEL of 50 nanograms/m3.

  • A suitable airlock system for inserting the highly active substance into the isolator, as well as for removing material and waste
  • High-performance filter technology
  • Hygiene design for cleaning on product changeover
  • Glove testing
  • Transfer of payload into conjugate container

5.0 Airlock systems
The most common design mistakes occur with the airlock system into and out of the isolator, the filter technology, the gloves and the transfer of the substance into the conjugate vessel.

The airlock system is needed to insert the highly hazardous pharmaceutical substance into the isolator, as well as to remove empty containers or product residues from the isolator. There are various systems available for carrying out this transfer.

Possible airlock systems:

  • Transfer airlocks with locked doors between the pre-chamber and the main chamber of the isolator
  • Endless liner technologies
  • Rapid transfer ports (RTPs)

These systems all have weak points of varying significance that could cause a possible containment breach. Furthermore, these systems often fail to meet OELs below 50 nanograms/m3. The form of containment necessary to achieve an OEL of 50 nanograms/m3 comprises a combination of two different barrier systems – e.g. a Rapid Transfer Port (RTP) attached to an airlock on the isolator, and a locked door from the airlock into the isolator, with the pressure remaining higher in the airlock than in the main chamber of the isolator. Another dual barrier system is an airlock with an endless liner system between the material airlock and the main chamber.

6.0 Filter technology
The filter technology used together with the isolator is another critical area. Given that personal protection isolators are operated in negative pressure, suitable filter technologies are also required at the air inlet and outlet points of the isolator.

Possible filter systems:

  • Bag-in/bag-out filter
  • Push-push filter cartridge
  • Filter cartridge (FiPa)

All of these filter systems are suitable for preventing highly hazardous substances from escaping from the isolator. Some of them present a GMP risk, however: with bag-in/bag-out filters and push-push filter cartridges, the contaminated filter can recontaminate the new filter during a filter change. Particles can then become detached from the new filter and enter the area containing the next product; this is critical because such recontamination often goes undiscovered. The filter cartridge (FiPa) avoids this problem.

Photo 1.0: FiPa open

FiPa filter technology was developed as a filter isolator. The FiPa is designed as a closed system that is attached to the isolator, with the opening on the FiPa towards the interior of the isolator operated from the outside. The dust-laden air can enter the FiPa, and the dust is deposited on the filter. During product changeover, the FiPa is sealed again from the outside. The isolator can now be cleaned, and after cleaning the FiPa is removed with no risk to the operator or to the next product to be manufactured.

In their paper “Safe Change Filter Systems for Containments in the Pharmaceutical Industry,” published in Die Pharmazeutische Industrie (September 2011), Frank Lehmann and Jörg Lümkemann explored the various issues of contamination-free filter change. [1]

7.0 Hygiene design and glove testing
The concept of hygienic design refers to the cleanability of the interior surfaces of the isolator and of devices built into it. As mentioned above, cleanability is hugely important, as it represents the greatest GMP risk of cross-contamination between two consecutively manufactured substances. Particular attention must therefore be paid to the seals on the front glass panels when designing an isolator. Static seals are often used, and they are also the most problematic, because they wear out over time and can allow dust to be deposited and to penetrate critical areas. This wear can also affect the airtightness of the isolator. After cleaning in particular, opening the glass panel can cause product residues deposited in the seals to become detached and escape from the isolator. Inflatable seals are a better option in terms of hygienic design. With an optimal design, these seals allow highly accurate sealing of the glass panel, and the seal and its functionality can be tested using validated measuring equipment.

Photo 2.0: Glove testing.

Another important point is the design of the seals for gloves, as well as glove testing. When it comes to attaching the gloves there are also various possibilities for ensuring that the necessary level of containment is achieved. Most of these options make use of a double o-ring groove, which is needed to ensure a closed changeover in the event of a damaged glove.

From a containment perspective these o-ring attachments are a weak point, as containment cannot be achieved if the gloves are fixed incorrectly. It is also important to prevent the highly active or highly hazardous substance from accessing and becoming deposited around the o-rings, as this area is critical when it comes to cleaning. An ideal solution is an additional seal on the o-ring and on the sleeve of the glove, in order to prevent the substance from reaching the o-ring.

Glove testing is carried out to complete the safety check of this area, and involves examining the gloves for small tears or pinholes. The gloves are used to work inside the isolator, and are therefore exposed to risk of damage. This damage inspection should be carried out on a regular basis.

In their paper “How Risky are Pinholes in Gloves? A Rational Appeal for the Integrity of Gloves for Isolators,” published in PDA Journal of Pharmaceutical Science and Technology (2011), Angela Gessler, Alexander Staerk, Volker Sigwarth and Claude Moirandat, Ph.D, describe different glove integrity test procedures and their ability to detect leaking gloves, as well as results from extensive microbiological tests performed to provide further evidence and cross-correlation with physical testing. [2]

8.0 Transfer to the conjugate vessel
If the toxin/payload is in powder form, transfer to the next process via a powder transfer system should be avoided. The transfer systems currently available are not fully adequate for this step; it is more advisable to mix the powdered substance with a suitable liquid within the isolator. Once the highly hazardous substance has been dissolved, it should be transferred to the conjugate container either via a peristaltic (tube) pump or via an “AT Connect” connector.

Photo 3.0: The AT Connect

Both of these options involve single-use systems that facilitate safe transfer. With the peristaltic pump it is important to ensure that the product hoses connecting the conjugate container are cleaned before being removed. Here it is important that the hose connectors are designed in such a way that spillage is prevented when the hoses are disconnected. The “AT Connect” system makes this possible thanks to a special connecting and disconnecting process, and was developed to transfer sterile liquids from a container into an isolator for sterile filling into vials or syringes.

The same principle can also be applied to the closed transfer of a liquid from an isolator to the conjugate vessel. The passive adapter of the “AT Connect” system is attached to the conjugate vessel. The active part of the “AT Connect” adapter is located in the isolator, and the passive part is connected from the conjugate vessel to the active part of the isolator and locked in place. Once it has been locked, the active part can be opened and connected to the liquid container in the isolator, to allow for safe, closed transfer.

9.0 Aseptic fill and finish
Once the ADC has been sterile filtered, the fill and finish process can take place. It consists of the following critical steps:

  • ADC transfer to the vial-filling area
  • Vial-filling
  • Freeze-drying (lyophilization)
  • Inspection
  • Packaging

The entire manufacturing process from vial-filling to the freeze-dried pharmaceutical product takes place under cleanroom class ISO 5 conditions. Given that the ADC product is a highly hazardous substance, it is also recommended that these manufacturing steps be handled using isolator technology. Isolator technologies are widely used in aseptic manufacture, and have the benefit that operating personnel have no direct access to critical aseptic areas.

Based on the Fractional Negative method of determining the D-values of Biological Indicators (BIs), contained in the ISO 11138-1 and EN 866-3 standards, Sigwarth and Moirandat describe the requirements regarding aseptic manufacture in isolators and discuss a complete and systematic method that enables the parameters for each cycle phase to be determined and verified as well as the effectiveness of the process to be quantified. Their method also enables differences in bacterial reduction between positions which can be effectively decontaminated and “worst case” positions to be quantified. Sigwarth and Moirandat further describe how these quantified results can be used to individually adjust a process to specific overall bacterial reduction requirements. [3]  

In another paper, Patrick Vanhecke and his colleagues Sigwarth and Moirandat present the major criteria for the effective use of fumigation, with emphasis on a new H2O2 procedure focused on a proper, simplified validation of the process using standardized biological indicators with defined concentrations. This new concept for validating the sanitization procedure overcomes the problems associated with conventional surface disinfection validation, allowing considerably greater safety at much lower cost and effort. [4]
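Both validation approaches rest on the log-linear kinetics behind biological-indicator D-values: the D-value is the exposure time required for a 1-log (90%) reduction of the spore population. As a minimal sketch (the D-value and populations below are hypothetical, not taken from the cited studies):

```python
import math

def surviving_population(n0: float, d_value_min: float, t_min: float) -> float:
    """Spores surviving after t minutes of exposure, given the initial
    population n0 and the D-value (minutes per 1-log reduction)."""
    return n0 * 10 ** (-t_min / d_value_min)

def required_exposure(n0: float, n_target: float, d_value_min: float) -> float:
    """Exposure time needed to reduce a population from n0 to n_target."""
    return d_value_min * math.log10(n0 / n_target)

# Hypothetical H2O2 cycle: a BI carrying 10^6 spores with a D-value of
# 1.5 min needs 9 minutes for a 6-log reduction; "worst case" positions
# with a larger effective D-value need proportionally longer exposure.
t_6log = required_exposure(1e6, 1.0, 1.5)
```

This is why quantifying position-dependent differences in bacterial reduction matters: the cycle must be sized to the slowest-decontaminating position, not the average one.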

In accordance with GMP guidelines on aseptic manufacture, isolators are operated at positive pressure in order to protect the product. When it comes to protecting personnel, however, isolators should be at negative pressure in order to prevent the active substance from escaping. But how can these GMP and personal protection requirements be reconciled?

ADC aseptic fill and finish
Photo 4.0: ADC aseptic fill and finish

This double safety level can be achieved by incorporating additional safety systems. These are essentially similar to those used in the personal protection isolators described in this article: special seals on the glass panels, with regular inspection of those seals; FiPa technology to prevent the highly hazardous substance from reaching the air circulation channels; and an isolator built to an absolute hygienic design.

It is also necessary to prevent the substance from spreading through the isolator should a vial break, and this is achieved by means of pressure cascades: the more critical the area, the lower its pressure relative to the adjacent areas. Targeted air flows toward the FiPa filters also reduce the spread of the substance, particularly during vial-filling and the unloading of the freeze-dryer. These measures enable occupational exposure levels below 10 nanograms/m3 to be achieved, as verified on the basis of the ISPE Good Practice Guide “Assessing the Particulate Containment Performance of Pharmaceutical Equipment.” [5]

Aiming to define current good practice, the second edition of the ISPE Good Practice Guide provides information designed to aid organizations in benchmarking and improving their practices. The guide has been updated to address a broader selection of containment technologies and processing equipment, and it provides technical guidance and consistent methodologies for evaluating the particulate containment performance (particulate emissions) of pharmaceutical equipment and systems. [5]

10.0 Packaging
Following the freeze-drying process, the dried powder is contained in sealed vials. These vials are washed before they leave the isolator in order to prevent any contamination. The risk of a vial being broken during inspection and packaging remains, however.

Containment packaging machine
Photo 5.0: Containment packaging machine

For this reason, the inspection and packaging of the vials must also be protected using suitable isolator technology in the critical areas.

11.0 Summary
ADCs are a new generation of highly active and extremely hazardous substances in the pharmaceutical industry that call for a new kind of containment for their manufacture. While the quantities manufactured in the initial phase of development are low, there is a significant risk of coming into contact with the product if adequate protective measures are not taken. Isolator technologies are suitable for this purpose, but require innovative solutions and safety precautions when it comes to handling ADCs safely.


April 4, 2016 | Corresponding Author Richard Denk | doi: 10.14229/jadc.2016.04.02.001

Received: February 28, 2016 (German) / March 20, 2016 (English)  | Published online April 4, 2016 | This article has been peer reviewed by an independent editorial review board.

Featured Image: Antibody-drug Conjugates (ADC) Aseptic fill and finish. Courtesy: © Richard Denk/SKAN AG. Used with Permission. Other Image and illustrations Courtesy: © Richard Denk/SKAN AG. Used with Permission.

Disclosures: Richard Denk is Head Sales Containment at SKAN AG and ISPE chair of the COP Containment DACH.

Creative Commons License
This work is published by InPress Media Group, LLC (The Challenge of cGMP in the Manufacturing of Antibody-drug Conjugates) and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at adcreview.com/about-us/permission.


Last Editorial Review: April 2, 2016



The post The Challenge of cGMP in the Manufacturing of Antibody-drug Conjugates appeared first on ADC Review.

When Data Integrity Takes Center Stage


1.0 Abstract
Data integrity has become an important issue. To comply with regulations, companies need to optimize data integrity and the underlying strategies involved in compliance and accountability. The following article outlines the basic regulatory expectations surrounding data integrity, as well as the strategic, multi-tiered approach needed to establish a result-based system of accountability and develop a culture of quality, ethics, and compliance.

Alongside risk assessments, electronic records, and outsourcing, data integrity has become increasingly important to regulatory agencies focusing on critical aspects of pharmaceutical quality management.

This became clear with the publication of the Good Manufacturing Practice (GMP) Data Integrity Definitions and Guidance for Industry by the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) [1]. The guidance confirmed that the fundamental concept of data integrity should not be taken lightly, and that the consequences of failure can be severe.

But to meet the challenge of successfully implementing a data integrity strategy, what does an organization need to do to ensure its processes meet the required quality standards? Furthermore, how do training, awareness, system design and control, and data management practices ensure success?

2.0 What is Data?
Data integrity is not restricted to electronic data. The MHRA definition applies to all kinds of data, regardless of whether they are paper-based (manual) or generated within an electronic system.

To truly understand the regulatory requirements, it’s vital to establish a basic terminology. The MHRA defines “data” as information derived or obtained from raw data (e.g., a reported analytical result), while “metadata” is defined as the attributes of other data that provide context and meaning. Consequently, metadata describe the structure, data elements, interrelationships, and other characteristics of data.
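The distinction can be made concrete with a small illustrative record; all field names here are hypothetical, not drawn from the MHRA guidance. The reported result is the data, while the attributes that give it context and meaning are the metadata:

```python
# Illustrative only: a reported analytical result (the "data") together
# with the attributes that give it context and meaning (the "metadata").
record = {
    "data": {
        "result": 99.2,      # the reported analytical result
        "units": "% area",
    },
    "metadata": {
        "analyst": "jdoe",              # attributable to a person
        "instrument_id": "HPLC-07",     # which system generated it
        "method_version": "3.2",        # interrelationship to the method
        "acquired_at": "2016-05-12T09:41:00Z",  # contemporaneous record
        "audit_trail_enabled": True,
    },
}
```

Stripped of its metadata, the bare number 99.2 loses the structure and context that make it a trustworthy GMP record.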

3.0 Failure
To meet the requirements for data integrity, GMP facilities need to exercise discretion during implementation of both organizational and technical controls. The extent of the controls should be in line with the criticality of the data being generated and the complexity of the system or process being used.

To establish that data are trustworthy, or have not been tampered with or manipulated, the MHRA requires that data are:

  • Attributable to the person generating the data
  • Legible and permanent
  • Contemporaneous
  • An original record (or “true copy”)
  • Accurate

4.0 Fraud
While these requirements may seem simple enough, there is another common misunderstanding about data integrity. Deliberate acts of fraud, falsification and/or provision of incorrect information are often considered to be the only causes of data integrity failures. That is not always the case. Although fraud is a concern and among the most obvious root causes of regulatory problems, data integrity breaches caused by an incorrect system configuration or poor system controls are more difficult to identify, yet equally harmful. Failure to meet regulatory requirements has serious ramifications and has, in many cases around the world, resulted in severe actions from regulatory agencies.

After developing a proper understanding of data integrity definitions, expectations, and consequences of failure, it is critical to understand steps for ensuring data integrity.

5.0 Responsible Integrity
GMP-based facilities and analytical laboratories need to develop a culture of quality, ethics and compliance, and establish a result-based system of accountability. In such a climate, it is crucial that employees at all levels of an organization, whether operational, quality or manufacturing staff, clearly understand their responsibilities and feel confident escalating concerns in any part of the organization before they become significant issues. Such an approach starts at the basic level; for example, each employee should recognize that s/he is accountable for his or her own signatures. To be successful, however, a positive culture that empowers individual employees to report issues and recognize opportunities for improvement needs to be established at the senior leadership levels. In contrast, a culture of fear will only increase the potential for data manipulation and the risk of fraud and data integrity failures.

6.0 Importance of Training
Training is essential for the quality and accuracy of data integrity practices. Internal quality auditors need to be experienced and competent in detecting data integrity deficiencies, and data verification activities must be part of the audit process.

Manufacturing personnel and technical laboratory staff must also have a complete and comprehensive understanding of, and appreciation for, the procedures and policies that govern and secure data integrity. Because no facility is event-free, deviations will occur; appropriately trained staff should be able to reinforce data integrity policies and procedures so that the impact of any such departure from protocol is minimized.

7.0 Human Error
Although technical controls greatly reduce human error, the manner in which data are generated dictates the data integrity risk. Paper-based manual observations usually provide more visibility into potential data integrity risks than a configurable, complex, computer-based system. Manual recording can still fail, however, so all data must be recorded in real time, directly onto the GMP record. These records also need to be controlled by issuance and reconciliation procedures for workbooks, batch records and notebooks.

8.0 Traceability
Laboratory equipment and systems need to be configured appropriately to enable traceability to the employee generating the data, access to the original (source) data, and visibility of any data changes and the reasons for those changes. This can be accomplished by following a set of simple guidelines:

  • Enable audit trails on systems.
  • Limit system administrator access to a few distinct individuals. The number of administrators should take into account the size and nature of the organization.
  • Remove the ability for laboratory personnel to delete, overwrite, copy, alter or in any way manipulate data.
  • Ensure that each employee has a unique ID and accompanying password for the system.
  • Upgrade the software to ensure it is compliant with the Food and Drug Administration’s 21 CFR Part 11 and the European Medicines Agency’s Guidelines to Good Manufacturing Practice Annex 11.
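As an illustrative sketch only, not a validated system, the guidelines above can be made concrete: an append-only, hash-chained log keeps every change attributable, contemporaneous and tamper-evident. All names in this sketch are hypothetical:

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only log: each entry is chained to the previous entry's
    hash, so any later alteration breaks the chain."""

    def __init__(self):
        self.entries = []

    def record(self, user_id: str, field: str, old, new, reason: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "user": user_id,                                  # attributable
            "time": datetime.now(timezone.utc).isoformat(),   # contemporaneous
            "field": field,
            "old": old,                                       # original value kept
            "new": new,
            "reason": reason,                                 # why the data changed
            "prev": prev_hash,
        }
        body = json.dumps(entry, sort_keys=True)
        entry["hash"] = hashlib.sha256(body.encode()).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the whole chain; False means an entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = json.dumps({k: v for k, v in e.items() if k != "hash"},
                              sort_keys=True)
            if e["prev"] != prev or hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Real systems (laboratory information management systems, chromatography data systems) implement this natively; the point of the sketch is only that deletion or overwriting becomes detectable rather than silent.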

9.0 Conclusions
To meet regulatory requirements, GMP organizations need to establish robust and sound programs that protect the data life cycle. Failure in just one area compromises the data integrity. Successful preservation of the data life cycle can only be achieved in organizations where a culture of quality, ethics and accountability is firmly established, a robust training program is employed, and organizational and technical controls are in place.


June 10, 2016 | Corresponding Author: Doug Chambers | doi: 10.14229/jadc.2016.06.01.001

Received: April 28, 2016 | Published online June 10, 2016 | This article has been peer reviewed by an independent editorial review board.

Featured Image: Data Courtesy: © University of Cambridge (UK)/Automatic Statistician. Used with Permission.

Creative Commons License
This work is published by InPress Media Group, LLC (When Data Integrity Takes Center Stage) and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at adcreview.com/about-us/permission.


Last Editorial Review: June 10, 2016




Evolving CMC Analytical Techniques for Biopharmaceuticals


1.0 Abstract
During the (early) preclinical drug development process as well as manufacturing of biopharmaceutical (protein) products, analysis and characterization are crucial in gaining a better understanding of the physical and chemical properties of various materials. These properties can have an impact on the manufacturability as well as the performance, potential for metabolism, stability and appearance of a specific medicinal product. Hence, properly characterizing these products is essential for a drug candidate to move from drug development to regulatory approval and, finally, the clinic.

In recent years, complex biopharmaceutical drugs and biologics have evolved into mainstream therapeutics. The manufacturing of these compounds, including monoclonal antibodies, bispecifics, antibody-drug conjugates (ADCs), and recombinant and other therapeutic proteins, requires extensive analytical and comprehensive characterization using a variety of techniques, including non-compendial methods, and sometimes an intricate quality control methodology, to confirm manufacturing consistency and product quality.

Because biopharmaceuticals and biologics exhibit highly diverse structures and broad biological activities, a study of these agents is a relatively complex process requiring sophisticated analytical techniques. Furthermore, in addition to these complexities, regulatory expectations to better understand product impurities and degradants in biopharmaceutical products continue to increase.

As a result, many drug developers may find that their current global chemistry, manufacturing, and control (CMC) systems are quickly becoming obsolete. Consequently, new, highly sensitive and specific technologies are becoming the new normal.

Keywords: Biopharmaceutical analysis, Characterization, Protein therapeutics, Bioanalytical methods, Structure and function, Physical and Chemical properties


2.0 New Analytical Approaches
The field of monoclonal antibodies, launched with Köhler and Milstein’s 1975 publication of a method to produce fully intact murine IgG antibodies, has opened a new era in the development of novel medicinal products. [1] In the four decades since that initial development, chimerization, humanization and fully human antibody technologies have followed. [2]

Subsequent to the growth of antibody-based products, new technologies have emerged for creating modified forms of antibodies, including antibody fragments, antibody-drug conjugates or ADCs as well as bi- and multi-specific antibodies.

In the development of these next-generation medicinal compounds, a better understanding of currently approved ADCs and novel site-specific bio-conjugation technologies is required. For example, a better analytical understanding of the structure-activity relationship accelerates the discovery and development of the next-generation ADCs with defined and homogeneous compositions.

Analytical methods and characterization for novel biopharmaceuticals and biologics involve complex, multi-faceted procedures stretching from early (pre-) clinical drug discovery to clinical development, regulatory approval and, finally, market entry.

Most of this work takes place during the early development phase, and is vital to help understand the influence of process changes, measured against an established reference standard.

3.0 Protein Therapeutics
Biopharmaceutical therapeutics are inherently challenging to characterize because of their complexity and natural heterogeneity. Therefore, appropriate and complete analysis ensures meaningful and reliable characterization, and provides the data required to satisfy regulatory requirements concerning product identity, (im)purity, concentration, potency, stability, safety and overall quality.

Methods used to characterize primary and higher-order structures (including techniques to determine protein sequence, posttranslational modifications, folding and aggregation) and protein concentration (including amino acid analysis, intrinsic protein absorbance and colorimetric methods) are vital to avoid aberrant results for key attributes that could, potentially, raise quality issues.

In addition, characterization and analysis of biopharmaceutical proteins also involves product- and process-related determination of impurities, which may compromise the safety of the protein therapeutics. This includes various assays (including bioassays and noncell-based binding assays) for determining the functional activity of proteins, which may be indicative of potency.

Overall, a complete approach to characterization helps developers to be confident that their product meets regulatory requirements as well as product quality and safety standards.

4.0 Changing Technologies
While spectrophotometric analyses of proteins are commonly used, there may be a number of important reasons to change analytical methods and characterization techniques.[a]

The reasons may include:

  1. New techniques may allow for better characterization, making it possible to follow the stability of specific molecules and proteins, as well as contribute to a deeper understanding of them. New techniques may include imaging, capillary electrophoresis, ultra-high-resolution mass spectrometry, micro-flow imaging (MFI), etc.; (Figure 1.0)
  2. Improved technologies to replace legacy methods. Examples include using ultra-high-performance liquid chromatography (UHPLC), a relatively new technique that opens new possibilities in liquid chromatography, in place of high-performance liquid chromatography (HPLC), and Capillary Western (WES), a quantitative western blot platform from ProteinSimple, which offers increased precision and specificity versus ELISA; (Table 1.0)
  3. Formulation and process changes may occur in the early stages of drug development. Even through Clinical Trial phase I and phase II, there may be formulation or process changes, which may require additional or new analytical methods;
  4. There may be an interfering compound within the formulation. One example is the use of surfactants[b], such as Polysorbate 80[c] (also known as PS80), which may interfere with the reverse-phase method. To be certain about stability when observing new degradants, it may be necessary to use a new method that will resolve and quantify the new analytes;
  5. There are specific regulatory requirements that apply to approved products, including the expectation of periodic method assessment for improvement;
  6. Many techniques allow for strategic business decisions, resulting in high throughput with low costs. This largely depends on how many lots and stability studies are necessary. In turn, this may directly impact the costs associated with the regulatory approval process of products being developed.
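Behind the legacy spectrophotometric determination mentioned above (see footnote [a]) sits the Beer-Lambert relation, c = A / (ε · l). The sketch below assumes a typical IgG mass extinction coefficient of about 1.4 mL/(mg·cm); the numbers are illustrative only, not a validated method:

```python
def concentration_mg_per_ml(a280: float,
                            extinction: float = 1.4,  # mL/(mg*cm), typical IgG
                            path_cm: float = 1.0) -> float:
    """Beer-Lambert estimate of protein concentration from absorbance
    at 280 nm: c = A / (extinction coefficient * path length)."""
    return a280 / (extinction * path_cm)

# An A280 of 0.70 in a 1 cm cell corresponds to roughly 0.5 mg/mL for
# a typical IgG; the coefficient must be determined for each protein.
```

The simplicity of this calculation is precisely why UV-VIS remains the baseline against which the newer, higher-resolution techniques listed above are judged.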

Figure 1.0: A number of methods developed in recent years allow scientists to examine antibodies much more closely. These include ultra-high-resolution mass spectrometry (UHR-MS), multiple reaction monitoring (MRM) mass spectrometry, ultra-performance liquid chromatography (UPLC)[d] analysis of glycans (both by MS and by HPLC fluorescence), micro-flow imaging analysis and automated Western (WES).

5.0 Regulatory Implications
The regulatory process established by the U.S. Food and Drug Administration (FDA) requires that each New Drug Application (NDA) and Abbreviated New Drug Application (ANDA) include the analytical procedures necessary to ensure the identity, strength, quality, purity and potency of the drug substance and drug product. [3][4] Furthermore, each Therapeutic Biologic Application (BLA) needs to include a full description of the manufacturing process, including analytical procedures that demonstrate that the manufactured product meets prescribed standards of identity, quality, safety, purity and potency. [5]

The analytical procedures and methods validation for drugs and biologics, Guidance for Industry, states that, over the life cycle of a medicinal product, new information (e.g., a better understanding of product characteristics) may warrant the development and validation of a new or alternative analytical method. [6]

Analytical methods should not, however, be considered “locked down” once they have been validated or once clinical trial phase I or phase II is reached. To fully understand the biopharmaceutical products involved, the FDA requires scientists to consider new or alternative analytical technologies, even after completion of the drug approval process.

The FDA also requires that drug developers and manufacturers periodically evaluate the appropriateness of an analytical method and consider new or alternative methods. To make this process simpler and more robust, and in anticipation of life-cycle changes in the analytical process, an appropriate number of drug samples should be archived to allow for comparative studies. Samples must not only be set aside for stability studies; a reasonable number should also be archived at the proper temperature (typically -80°C for a biopharmaceutical sample) for use in crossover and comparability studies. This is critical to smoothing the pathway for change from one analytical method to another. [6]

6.0 Regulatory Reporting Requirements
Establishing a regulatory framework, the FDA sets “safety reporting requirements for human drugs and biological products” that include mandatory reporting of any change in analytical methodology, and describes—among other things—a developer’s responsibilities for reviewing information relevant to the safety of an investigational drug and their responsibilities for notifying FDA. These reporting guidelines cover minor, medium and major changes. (See: Table 2.0)


6.1 Minor Changes
Minor changes are those within the validated range of the analytical method. For example, when a chromatography method validated for a column temperature range of 10°C to 40°C is changed from a nominal 30°C to 35°C, this is considered a minor change. While this change can be submitted as part of the annual report, the applicant is still required to report the change to the agency. [8]

The Guideline for Industry detailing the requirements for the annual report stipulates that post-approval manufacturing changes must be reported in compliance with current Good Manufacturing Practice (cGMP). [9]

6.2 Moderate Changes
At the moderate level, the validated range is exceeded for certain parameters. Such a change may have an adverse effect on the identity, strength, quality, purity or potency of the drug product. Using chromatography as an example, this could be a change in mobile phase from acetonitrile to methanol, or a change in the gradient of a method. Such a change carries more stringent requirements: validation of the new method and additional comparability studies.

6.3 Major Changes
Major changes include modifications that establish a new analytical method, eliminate a current method (substituting one method for another rather than adding a new method), or delete or change the acceptance criteria for a stability protocol.

At the major level, there are substantial changes to the analytical method; for example, switching from UV detection to mass spectrometric (MS) detection. Such a change must be validated with a formal, statistically rigorous comparability study designed to show any differences, or lack thereof.

In case of a major change, developers are also required to submit and receive FDA approval of a supplemental application to the original NDA or ANDA. In what is known as a Prior Approval Supplement (PAS), a major change must be reported and include a detailed description of the proposed change, the products involved, a description of the new method, the validation protocols and data, a description of the studies performed to evaluate the effect of the change, a comparability report, a description of the statistical method of evaluation and a final study report. [8][9]

While a PAS is formally required only for approved drugs, it also sets expectations for early-phase products. Although these are not covered under formal CFR regulations, the FDA does, in fact, expect a similar study to be performed when a drug is in clinical trial phase I, II or III.
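The three reporting tiers above can be summarized as a simple decision rule. The sketch below is a deliberate simplification for illustration only, not regulatory guidance; the function and its parameters are hypothetical, though the category names track the examples in the text:

```python
from enum import Enum

class Reporting(Enum):
    ANNUAL_REPORT = "minor: document in the annual report"
    CBE_30 = "moderate: Changes Being Effected supplement (30-day wait)"
    PAS = "major: Prior Approval Supplement before implementation"

def classify_method_change(within_validated_range: bool,
                           new_or_replaced_method: bool) -> Reporting:
    """Simplified mapping of an analytical-method change to its
    reporting category (illustrative only)."""
    if new_or_replaced_method:
        return Reporting.PAS            # e.g., UV detection -> MS detection
    if within_validated_range:
        return Reporting.ANNUAL_REPORT  # e.g., 30C -> 35C within a 10-40C range
    return Reporting.CBE_30             # e.g., mobile phase or gradient change
```

In practice the assessment is a quality-unit decision informed by the change's potential effect on identity, strength, quality, purity and potency, not a two-flag lookup; the sketch only captures the ordering of the tiers.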


7.0 Comparability Study
The comparability process is critical. The FDA requires that a manufacturer carefully assess manufacturing changes and evaluate the product resulting from these changes for comparability to the pre-existing product. In such a case, the goal is to show that a new analytical method is superior to the original method. [10]

Figure 2.0: Numerous new analytical approaches and characterization methodologies have emerged that are designed to better analyze biopharmaceuticals, allowing scientists to look at monoclonal antibodies much more closely. The FDA expects that applicants use novel methods in lieu of older methods.

Based on the guideline for industry, determinations of product comparability may be based on chemical, physical and biological assays and, in some cases, other nonclinical data. This requires referring to archived samples from historical batches, whether those are batches included in the Investigational New Drug (IND) submission, clinical batches or registration batches. [10]

This is a critical part of the process, because developers need to show that a new method is more sensitive or selective, and is therefore detecting and quantifying impurities or degradants that were always present, but not seen by the current (existing) method and, as a result of a change of methodology, can now be better monitored.


8.0 Comparability Design
A well-planned comparability design will assess the effect of CMC changes, allowing the FDA to determine whether a specified change can be reported in a lower category than would otherwise apply. Appropriate samples should be included, allowing a comparison of the ability of the new and original methods to detect relevant product variants and degradation. This approach provides sufficient information for the FDA to determine whether the potential for an adverse effect on the product can be adequately evaluated. [11] [12]

To be adequate, the number of batches should be statistically relevant. The guidance to industry emphasizes the use of a trained statistician. The reason is that, while the FDA recognizes that a comparability design is less complicated than a clinical trial, it requires a statistician to design a robust program clearly showing differences between methods. [11]
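A minimal sketch of the statistical core of such a study, assuming paired measurements of the same archived lots by the original and the new method (all data below are hypothetical): estimate the mean bias between methods and a paired t statistic to judge whether that bias is distinguishable from zero. A real study would be designed by a statistician, with pre-specified acceptance criteria.

```python
import math
import statistics

def paired_bias(old: list, new: list):
    """Mean bias (new - old) across lots measured by both analytical
    methods, plus the paired t statistic for that bias."""
    diffs = [n - o for o, n in zip(old, new)]
    mean_d = statistics.fmean(diffs)
    sd = statistics.stdev(diffs)
    t = mean_d / (sd / math.sqrt(len(diffs)))
    return mean_d, t

# Hypothetical purity results (% area) on the same six archived lots.
old_method = [98.6, 98.9, 99.1, 98.7, 98.8, 99.0]
new_method = [98.4, 98.6, 98.8, 98.5, 98.6, 98.7]
bias, t_stat = paired_bias(old_method, new_method)
# A consistent negative bias here would be the desired outcome: the new
# method resolving impurities the original method could not detect.
```

The sign and consistency of the bias matter as much as its magnitude; a new method that reports lower purity on every archived lot supports the claim that the impurities were always present.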


9.0 Concerns
There are a number of concerns associated with the development and the implementation of new methods designed to replace a current (existing) method. The biggest question is whether the results of the analytical methods will be different.

In general, the expectation is that by changing analytical methods, there is indeed a fairly high probability of getting different results. Hence, if there is a change to an improved method, the ideal scenario is a change in sensitivity or specificity, which would therefore show an additional or higher level of impurities or degradants.

Another concern is assay bias. For the statistical analysis of data, it is important that both the new and old data are within specification. Based on the guidelines to industry, the cause of bias must be examined to see if such bias has an effect on the data. Hence, analyzing archived samples to show that impurities and degradants were always present is crucial.

For products that have already been marketed, there is a concern that new impurities may result in the requirement for new, additional, clinical work. If there are archived samples to show that the materials were always there, the clinical data will still prove that the drug is safe and efficacious, and that the newly measured impurities and degradants could not be measured with the previous method.

If such is the case, statistical analysis is still necessary to justify the bias; however, there is no need for additional clinical work. The new process is just implemented to compare and show an improved method. [11][12]


10.0 Conclusion
Preclinical drug discovery and development, as well as the manufacturing of biopharmaceutical products, is a complicated process involving rigorous experimental scientific study. Within the regulatory guidelines, successful advancement of novel drug candidates requires early planning, setting aside archived samples, having a very tight validation report and study and, finally, having a well-planned, statistically rigorous comparability study.

If these steps are in place, there is a high probability of a smooth regulatory process. Drug developers may expect to receive approval to use the new analytical method for a marketed product. And if the product is in a preapproval process, the expectation is that the agency will have no additional questions.


Footnotes
[a]UV-VIS spectroscopy (ultraviolet and visible spectroscopy) is typically used for the determination of protein concentration by either a dye-binding assay or by determining the absorption of a solution of a protein at one or more wavelengths in the near UV region (260-280 nm). Circular dichroism is another spectroscopic method used in the early-phase characterization of biopharmaceuticals (proteins).
[b]Surfactants are compounds that lower the surface or interfacial tension between two liquids.
[c]Polyoxyethylene-sorbitan-20-monooleate
[d] UHPLC and UPLC (Waters Corp.) allow for better separation of peptide mapping
[e]CBE-30 is similar to Changes Being Effected (CBE) and involves a filing with the FDA to gain approval of a moderate change (this may include a change that has a moderate potential to have an adverse effect on the identity, strength, quality, purity or potency of the drug product, as these factors may relate to the safety or effectiveness of the drug product). Based on the CBE-30, the FDA has 30 days to respond prior to implementation of any change. If a filer does not receive a reply from the FDA within 30 days, it is assumed that the change is approved.
[f]Chemistry, Manufacturing and Controls (CMC) has been renamed Pharmaceutical Quality/CMC


October 21, 2016 | Corresponding Author: Glenn Petrie | doi: 10.14229/jadc.2016.10.21.001

Received: August 19, 2016 | Published online October 21, 2016 | This article has been submitted for peer review to an independent editorial review board.

Featured Image: Pharmaceutical scientific researchers analyzing liquid chromatography data; Pharmaceutical industry manufacturing laboratory Courtesy: © 2016 Fotolia. Used with Permission.

Creative Commons License
This work is published by InPress Media Group, LLC (Evolving CMC Analytical Techniques for Biopharmaceuticals) and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at adcreview.com/about-us/permission.


Last Editorial Review: October 24, 2016



The post Evolving CMC Analytical Techniques for Biopharmaceuticals appeared first on ADC Review.

Drugs and Drug Hunters


All products of the creative process are a reflection of the individual makeup of the inventors.  A painting by Mark Rothko looks like a Rothko painting.  A symphony by Wolfgang Amadeus Mozart does not sound like the works of other composers.  A Harry Potter story will never be confused with The Hardy Boys or Nancy Drew.  Steve Jobs made sure that no one has trouble distinguishing an Apple computer from a PC.  Beverly Sills, the opera soprano, was world-renowned not only because of her musical skills and technical mastery, but because her voice had a very distinctive and pleasant timbre.

It should not be surprising therefore that drugs, which are also products of human creativity and ingenuity, reflect the character of the people who discover them.

Soil Day
Soil microorganisms have been explored as sources for novel drugs ever since 1928, the year when Alexander Fleming took note of an odd fungal contaminant in one of his experiments that turned out to produce penicillin.  Soon virtually all common soil microorganisms had been screened for the production of clinically valuable compounds, yielding the antibiotics we typically rely on today: penicillins, cephalosporins, tetracycline, erythromycin, etc.  But by the 1980s new antibiotics were getting harder and harder to find, and workers began pursuing the idea that microbes growing in exotic places and exotic ecosystems would likely be the best sources of unusual and novel therapeutic leads.  To encourage their employees to prospect for such microbes, most pharmaceutical research laboratories instituted the practice of “soil days”.  The idea was that, if employees who went to unusual places on their vacations were willing to collect a few dozen soil samples while there, they would be awarded a “soil day”, a free day of extra vacation time.

It was a great deal for both parties.  Collecting the soil samples was trivial, basically bending down and scooping a tablespoon of dirt into a tiny zip lock bag.  The employee got a vacation day for minimal work and the pharmaceutical company gained access to some exotic samples.  I did it myself.  One year, when my spouse and I went on a hiking trip to Switzerland, I took a soil kit with me and picked up samples whenever we stopped at what looked to be an interesting place.   One day we were hiking along the shore of an alpine lake and I noticed a beach with very fine powdery off white sand.  It looked unusual so I bent down and scooped some up.

I didn’t think about this sample again until we were going through customs at Newark Airport.  It’s against the law for individuals to bring food, farm products and soil samples into the United States, but you can import soils with a special permit.  I was always scrupulous to make sure that I had my permit in the collecting kit before I left the country.  So when the customs people looked at my luggage I had no worries, at least until the official started to look very suspiciously at my little bag of fine white powder.   I whipped out my permit, but I think the customs man was not completely reassured until he saw that, along with my little bag of white powder, there were dozens of similar zip lock bags with contents ranging in appearance from simple garden dirt to dried mud and slimy pond muck.  I also think my saying that I was hoping that one of these samples might lead to a novel treatment for AIDS-related infections helped.  He let us in through customs and I went right to the lab, where we found that my samples, collected with much hope and expectation, produced zero.

Fabulously potent
In the mid-1980s a Lederle Lab scientist was on vacation in Texas and took a chalky soil sample from an area near the town of Kerrville that the locals call the “calichi pits”.  Back in the lab a strain of the Actinomycete bacterium Micromonospora echinospora was isolated from this soil sample and was found to produce a novel antibiotic later named calicheamicin [1].  Calicheamicin is fabulously potent.  The good news was that only a couple of calicheamicin molecules could easily kill a cancer cell (an almost totally unheard-of efficacy, a thousand times more potent than some of the best clinical antitumor drugs, like adriamycin).  The bad news was that only a couple of calicheamicin molecules could also easily kill a normal cell.  In fact, calicheamicin kills everything it touches: bacteria, fungi and viruses, eukaryotic cells and eukaryotic organisms like mice and people.

Studies on calicheamicin by George Ellestad and Nada Zein, among other scientists at Lederle Laboratories*, showed why calicheamicin was so fabulously potent: it has a highly unusual mode of action [2].  Calicheamicin acts as a “chemical nuclease”.  Like an enzyme (it is really a chemical catalyst), it is able to repeatedly bind to DNA and make double-strand breaks.  Exposure to just a few molecules of calicheamicin can chop an entire genome into hamburger.  But this finding also raised a question: how is the Actinomycete bacterium that produces calicheamicin able to resist its toxicity?  That question was answered by studies in the early 2000s [3], which showed that Micromonospora echinospora produces a protective protein called CalC.  Calicheamicin binds tightly to CalC, which leads to the destruction of both the CalC protein and the bound calicheamicin.  As a result, any molecules of calicheamicin that remain within the producing organism are rapidly destroyed before they can do any damage.

The therapeutic goal for calicheamicin was therefore to devise some sort of guided missile that could selectively deliver lethal calicheamicin to cancer cells.  This was a proven concept.  One hundred years earlier the German scientist, Paul Ehrlich, had devised the world’s first effective treatment for syphilis by attaching toxic arsenic to a Treponema pallidum binding dye: missile and warhead.

But one hundred years later scientists were able to come up with a better missile than an aniline dye.  The obvious modern missile was going to be some type of highly selective monoclonal antibody.  The problem was that the monoclonal antibody carrier had to be designed so as not to release any toxic calicheamicin until it reached the cancer cell.  Then, upon reaching the cancer cell, it had to dump its entire toxic payload.  Not easy.  It took ten years of hard work to get there, resulting in the development of gemtuzumab ozogamicin (Mylotarg®; Pfizer/Wyeth) [4].  The gemtuzumab ozogamicin antibody binds CD33, a myeloid-specific cell surface protein, targeting the calicheamicin payload for the treatment of acute myeloid leukemia (AML).  But, frustrating everyone involved, gemtuzumab ozogamicin did not turn out to be the magic bullet.  Ten years post-launch, gemtuzumab ozogamicin was removed from the market in the United States at the request of the U.S. Food and Drug Administration (FDA).  After years of clinical experience the FDA concluded that the drug was still too toxic, although it is still being used in Japan and studies continue to support the re-approval of this agent [5].

Needed: A Tremendous Dose of Luck
Among the members of the calicheamicin team, George Ellestad was a very special kind of person.  George’s hobby was trekking in the Himalayas in Nepal.  (No, none of the soil samples he brought back from Nepal ever produced a drug.)   He was a natural scientific leader, but avoided any and all formal scientific management roles.  He was a pure bench scientist.  When George spoke up at meetings he was so authoritative that everyone would immediately focus and listen carefully to him, listen much more carefully than they would listen to the big management bosses who also spoke at these meetings.  People would mistakenly characterize George as a high-level manager.  “No”, he would insist, “I’m just high bench”.  George is one of those unsung heroes of industrial science – someone who was effective way beyond his rank but not adequately formally recognized.

His colleague, Nada Zein, was a passionate scientist whose lifelong goal was to pursue and discover the truth.  She never let her career goals get in the way of that pursuit, exemplifying a quote from the famous 20th century philosopher, Ludwig Wittgenstein: “Ambition is the death of thought.”  Nada was passionately thoughtful.  I got to know Nada in no small part because she married a terrific chemist collaborator of mine named Doug Phillipson, with whom I had worked in a previous job.  If it seems like all drug hunters know one another, that’s because it’s true.  We’re an amazingly small community.

Doug and I worked on a number of drug projects together, with me leading the biology and Doug leading the chemistry.  We were also good friends outside the lab.  Doug sold me his favorite car, a little white 1993 Mazda Miata, which he had meticulously kept in near brand new condition but could not financially justify taking with him when he moved from New Jersey to California in the late 1990’s.  Over 20 years later the car is still running just fine.  Doug was totally trustworthy both as a scientific collaborator and a used car salesman.

Squibb Building, Brooklyn, New York, NY
Photo 1.0: Squibb HQ Building, Brooklyn, New York, NY (1968).

One project that Doug and I worked on together was the development of a novel antifungal drug.  At the time the AIDS crisis was in full swing and, prior to the development of effective antiviral drugs, fungal infections were producing 70% of the morbidity in AIDS patients.  Squibb** management felt that only one therapeutic approach was justified: to target the enzyme that was hit by the azole antifungals, drugs like miconazole (sold as Micatin Cream for vaginitis) that were first discovered in the late 1960s.  And after years of toil by a large group of Squibb scientists we finally found what management had asked for: lanomycin, a structurally novel antibiotic acting on the azole target.  However, just as the discovery was made, management abruptly reversed themselves and withdrew all support from our lanomycin lead.  The Greek gods should have sentenced Sisyphus to a lifetime of drug hunting.  Doug tried to keep the project going under the radar without management approval, but alas, with only minuscule resources lanomycin was never going to be turned into an FDA-approved drug.   (Ultimately the pneumocandin antifungals, compounds with a novel anti-cell wall mechanism of action, turned out to be the answer.)

Nada pursued a number of scientific initiatives after her work on calicheamicin, moving from Lederle to the Genomics Institute of the Novartis Research Foundation in La Jolla, California.  At the Novartis Foundation in La Jolla she was seeking to mate drug ligands to receptors, but soon came to the conclusion that this goal would be elusive due to management and leadership issues in the pharmaceutical industry.  So she decided instead to study for a master’s degree in social work and has been working as a marriage counselor ever since.  Counter-intuitively, it appears that it was easier for her to bind spouses together than drug ligands and their targets.  Go know.

The discovery of new medicines is brought forward by the dreams, aspirations and creative spirit of all those involved.   But despite all the great ideas and hard work you still need a tremendous dose of luck.


* Lederle Laboratories was purchased from American Cyanamid in 1994 by American Home Products Corp., and the Pearl River operation was renamed Wyeth-Ayerst. American Home Products renamed itself Wyeth in 2002 and became a piece of Pfizer in 2009. Today, the Pearl River campus is one of Pfizer’s five primary research sites and a central hub for Vaccine and BioTherapeutics research. The company also manufactures a number of oncology drugs at the Pearl River, including Mylotarg®.

** Squibb Corporation was founded in 1858 by Edward Robinson Squibb in Brooklyn, New York. E.R. Squibb was known as a vigorous advocate of quality control and high purity standards within the fledgling pharmaceutical industry of his time, at one point self-publishing an alternative to the U.S. Pharmacopeia (Squibb’s Ephemeris of Materia Medica) after he was unable to convince the American Medical Association to incorporate higher purity standards. References to the Materia Medica, Squibb products, and Edward Squibb’s own opinion on the utility and best method of preparation for various drugs are found in many medical papers and journals of the late 1800s.  Squibb Corporation served as a major supplier of medical goods to the Union Army during the American Civil War, providing portable medical kits containing morphine, surgical anesthetics, and quinine for the treatment of malaria (which was endemic in most of the eastern United States at that time). Squibb merged in 1989 with Bristol-Myers (founded in 1887 by Hamilton College graduates William McLaren Bristol and John Ripley Myers) to form Bristol-Myers Squibb.

January 17, 2017 | Corresponding Author: Donald R. Kirsch | DOI: 10.14229/jadc.2017.17.01.001

Disclosures: Donald R. Kirsch is the co-author of “The Drug Hunters: The Improbable Quest to Discover New Medicines.

Received: January 15, 2017 | Published online January 17, 2017 | This article has been submitted for peer review by an independent editorial review board.

Last Editorial Review: January 17, 2017

Featured Image: Working with fluorescent microscope. Courtesy: © Fotolia. Used with permission. Photo 1.0: Squibb HQ Building, Brooklyn, New York (1968). Courtesy: © Brooklyn Public Library. Used with permission.

Creative Commons License
This work is published by InPress Media Group, LLC (Drugs and Drug Hunters) and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at adcreview.com/about-us/permission.


Copyright © 2017 InPress Media Group. All rights reserved. Republication or redistribution of InPress Media Group content, including by framing or similar means, is expressly prohibited without the prior written consent of InPress Media Group. InPress Media Group shall not be liable for any errors or delays in the content, or for any actions taken in reliance thereon. ADC Review / Journal of Antibody-drug Conjugates is a registered trademark of InPress Media Group around the world.

The post Drugs and Drug Hunters appeared first on ADC Review.

Antibody-Drug Conjugates: Manufacturing Challenges and Trends


Antibody-drug conjugates (ADCs), a form of Immuno-conjugate or bio-conjugate, are an emerging class of medicines designed for high-specificity targeting and destruction of cancer cells. The mechanism of action is targeted delivery of a cytotoxic agent to the cancer cell via monoclonal antibody targeting of a specific cell surface marker. [1][2][3][4]

Upon binding, a biochemical reaction activates internalization of the ADC into the cell cytoplasm, where the drug becomes active, killing the cancer cell. [5] The ultimate advance with ADC therapeutics is that targeting and release of the drug specifically within the cancer cell means that healthy cells are not adversely affected and cancer cells can be more effectively destroyed. Success in ADC therapeutics stems from a deep understanding around each of the trilogy ‘Antibody- Linker- Payload (drug)’ technologies, with a complementary optimization of all three to generate an effective and potent ADC. [3][4]

Whole IgG has been the gold standard in ADC targeting to date; however, innovation in the field is bearing new formats, including engineered and smaller antibody and antibody-like particles such as Fab fragments, single-chain Fv, and antibody-like scaffolds. [2] A crucial element in ADC complexes is the linking technology. The linker unit may be cleavable or non-cleavable from the drug unit, which affects drug activity and availability.

With non-cleavable ADCs, the linker unit remains attached to the drug, which mitigates externalization and the resulting side effect of the drug entering healthy neighboring cells. With cleavable ADCs, the drug is completely cleaved from the linker unit upon internalization; the antibody is degraded to its amino acid form, and the released drug becomes active. Innovation in linking technology aspires to improve the coupling of payloads as well as to improve cleavage reactions, allowing improvements in payload delivery. The majority of ADC payloads are small molecules, which act via disruption of microtubules or induction of DNA damage. [2][4][6]

Market Insights
Although the first ADC was approved in 2001, it took almost a decade before the next ADC was approved. As of today, only Adcetris® and Kadcyla® are commercially available globally (Zevalin® has been approved in China only). Pioneers Pfizer/Wyeth withdrew Mylotarg® in 2010 after safety issues were observed during a comparative clinical trial.

The ADC market was worth approximately USD 900 million in 2015 with just two approved drugs [7], and its potential remains very large. There are 45 molecules in development, representing more than 300 projects: 42% in preclinical stage; 19% in Phase I; 11% in Phase II; and 3% in Phase III. [8] Over the next decade, we believe that 5-10 new ADC commercial launches will occur, targeting diseases in the areas of oncology (predominantly solid tumors) [9] and immunology (90% of the pipeline).

With 45 ADC projects in development and its expertise in developing and launching Adcetris®, Seattle Genetics is expected to maintain leadership in this space. The company quadrupled the size of its ADC pipeline through multiple licensing deals. [8]

To date, 182 companies (108 in the US; 53 in the EU; 16 in Asia; 5 in the rest of the world) are actively developing ADCs. [8] Despite this highly competitive market, however, the development of ADCs remains slow and will remain so in the coming years. An estimated 70%-80% of ADC manufacturing is outsourced. Considering the challenges in the development of linkers and payloads, [10] and the fact that only a few contract manufacturing organizations (CMOs) have the required capabilities, the market remains open for new players who can overcome the manufacturing challenges in this space.

Photo 1.0: Manufacturing scientist preparing for conjugation step used in ADC production.

Technical and Manufacturing Challenges
The biggest technical challenge associated with any drug manufacturing process is to provide consistently efficacious drug product, which is of the required purity and is safe from environmental and process related contamination. This challenge must be accomplished in a manner that protects employees, operators and the general environment from the harmful substances inherent in the process, and at a cost that makes the final drug marketable.

The starting point for ADC manufacturing is the parent monoclonal antibody (mAb). Often the supply of mAbs of suitable quality for therapeutic purposes is taken for granted due to the successful developments in purification templates and platform processes in previous years. [11][12]

It is not within the scope of this article to discuss future developments in mAb processing, but it is interesting to consider how growth trends in the ADC field will modify mAb manufacturing by raising a new set of priorities for mAb processes.  Many manufacturers already design and screen mAbs at an early stage for “manufacturability,” which is the set of physico-chemical properties that will make the molecule robust enough for the manufacturing environment, formulation and dosage requirements. [13] Such considerations include molecule pI, glycosylation and surface reactive sites as well as elimination of structural motifs known to potentiate aggregation or instability. The trend is for mAbs destined for ADC production to be further optimized for robustness against the more demanding process conditions inherent in ADC processes. Such enhanced requirements are likely to add to the existing cost and convenience drivers to stimulate further technological developments in mAb expression and processing.

Future ADCs are likely to utilize novel biological components, such as minimized antibody-derived binding moieties (scFv, nanobodies, domain antibodies, etc.) and molecules with bispecific target binding, which require adaptation of the conventional mAb downstream processing (DSP) platform or the adoption of new purification templates. [14]

Photo 2.0: Manufacturing scientist working in a glove box to weigh HPAPIs used in ADC production.

ADC process development is complicated by the need to optimize additional process steps that are not present in conventional mAb manufacturing, e.g., the antibody-drug conjugation reaction and subsequent drug substance purification. The drug-to-antibody ratio (DAR) is a critical quality attribute for the ADC, as it defines potency and therapeutic index. [15] The extent of the derivatization of the mAb can also, in extreme cases, adversely affect its biological and pharmacological properties, leading to poor tolerance, less effective targeting and/or stability problems. The choice of conjugation chemistry and linker are critical, and a thorough optimization of the reaction parameters is necessary.  Clinically approved ADCs have used conjugation chemistries with broad group specificity, targeting naturally occurring amine (lysine) or thiol (cysteine) amino acid side chains. [16] Thus, multiple and variable drug incorporation into the ADC is possible, but will have to be controlled to maximize efficacy and to meet the regulatory requirements regarding drug entity definition. [17]

Additional considerations for the control and optimization of the conjugation chemistry include the possible generation of mAb aggregates and drug/linker side reactions, which could lead to subsequent purification and analytical challenges. Many reaction conditions can be controlled, and it is necessary to have a thorough understanding of the critical parameters and how their possible interactions affect DAR and final ADC quality.  It is common to assess the critical reaction parameters using a statistical design of experiments. High-throughput screening methods are also advantageous, but require additional capital investment and the provision of complementary high-throughput analytical facilities to ensure maximum efficiency.  DAR can be monitored using RP-HPLC, HIC or analytical IEX, and size exclusion chromatography can be used to detect oligomer and aggregate formation.  More recent innovations point toward improvements in site-specific conjugation chemistries targeting well-characterized and unique sites within the mAb. Such sites may be designed into the mAb structure de novo by incorporating non-natural amino acids into the protein structure. [18]
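Spectroscopic DAR monitoring can be illustrated with a two-wavelength Beer-Lambert calculation: absorbances at 280 nm and at the drug's absorption maximum are resolved into antibody and drug concentrations, whose ratio is the DAR. The sketch below is a hedged illustration only; all extinction coefficients are placeholder values, not data for any real mAb or payload.

```python
import numpy as np

# Two-wavelength UV/VIS estimate of drug-to-antibody ratio (DAR).
# Extinction coefficients (M^-1 cm^-1) below are illustrative
# placeholders; a real assay uses measured coefficients for the
# specific mAb and drug-linker.
EPS = np.array([
    #   mAb        drug
    [224_000.0,   5_100.0],   # at 280 nm
    [ 82_000.0,  29_000.0],   # at the drug's absorption maximum
])

def estimate_dar(a280: float, a_drug: float) -> float:
    """Solve the 2x2 Beer-Lambert system for [mAb] and [drug]; return DAR."""
    conc_mab, conc_drug = np.linalg.solve(EPS, np.array([a280, a_drug]))
    return conc_drug / conc_mab

# Example absorbances for a hypothetical conjugate (1 cm path length):
print(round(estimate_dar(0.95, 0.80), 2))
```

The same linear-solve pattern extends to any pair of wavelengths where the two extinction coefficient vectors are linearly independent.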

Photo 3.0: Purification of a mAb at the Biodevelopment Center in Martillac, France.

The manufacturing challenges inherent with ADCs are attributable to the additional process, safety and analytical requirements conferred by conjugation of the biologic to the highly active cytotoxic component.

An ADC must be manufactured in a Current Good Manufacturing Practice (cGMP) aseptic environment whilst also ensuring containment of the highly toxic drug compounds to protect operators and the wider environment, which presents significant operational difficulties. The highest risk of operator/environmental exposure comes with the use of powdered cytotoxic reagents. These operations require the use of low-pressure isolators and advanced personal protective equipment, with some operations being designated as SafeBridge® containment Category 4. [19] Integration of these precautions with the aseptic manufacturing environment requires integrated facility and equipment design/engineering and a multi-disciplinary approach. [20]

Manipulation of the cytotoxic materials in liquid format is facilitated by the use of closed systems where all product contact surfaces are single use and disposable. Not all single-use components are routinely tested for resistance against the organic solvents used to solubilize the hydrophobic drugs used in ADC manufacture, and thus dedicated extractables and leachables testing is typically required.

ADC manufacturing facilities require high capital investment and extensive specialized training of operators, which explains the trend towards its domination by specialized CMOs.[21] As ADC pipelines advance, there may be advantages in developing sites where mAb production and conversion to an ADC can be closely combined.

An additional process development and manufacturing challenge includes the subsequent purification of the required ADC sub-population from the post conjugation reaction mixture.  The reaction mixture will include ADC variants with a range of DAR, unincorporated drug, spacer derivatives and organic solvents. Primary purification can be achieved by utilizing the size differential conferred by the mAb. Tangential flow filtration (TFF) can be used to retain the high molecular weight species whilst the underivatized reaction components can be removed by diafiltration.  This process also offers the opportunity to concentrate the ADC and reduce volume for subsequent downstream steps.  The UF/DF purification will likely require additional optimization (compared to the parent mAb process) because the addition of hydrophobic drug moieties to the mAb may result in reduced stability or solubility and an increased propensity to aggregate. Further purification can be achieved using conventional chromatography modalities, with cation exchange having the potential to resolve both mAb-derived aggregates and ADC species showing extremes of DAR.
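The diafiltration step described above follows a simple exponential washout model: for a freely permeating small solute (sieving coefficient S ≈ 1), the retentate concentration falls as C = C0 · exp(−N·S) after N diavolumes. The sketch below is a minimal illustration of that textbook relationship, not a model of any specific process in this article.

```python
import math

# Constant-volume diafiltration clearance during TFF.
# sieving = 1.0 assumes the solute passes the membrane freely;
# real small solutes (solvent, free drug) are usually close to this.
def residual_fraction(diavolumes: float, sieving: float = 1.0) -> float:
    """Fraction of solute remaining in the retentate after N diavolumes."""
    return math.exp(-diavolumes * sieving)

def diavolumes_needed(target_fraction: float, sieving: float = 1.0) -> float:
    """Diavolumes required to reduce a solute to the target residual fraction."""
    return -math.log(target_fraction) / sieving

# Clearing residual solvent/free drug to 0.1% of the starting level
# needs about 6.9 diavolumes (assuming S = 1):
print(round(diavolumes_needed(0.001), 1))   # → 6.9
```

A solute that is partially retained (S < 1) needs proportionally more diavolumes, which is why UF/DF optimization for an ADC can differ from the parent mAb process.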

Operator and environmental protection is also critical during the purification operations and, wherever possible, closed systems and single use components are used.  The availability of pre-packed scalable chromatography columns and ready to use, single use TFF capsules both improve convenience and reduce risk. Final processing stages will include further TFF (UF/DF) to adjust final formulation and conventional sterile filtration to meet regulatory guidelines.

Photo 4.0: A process engineer packing a chromatography column.

ADC manufacturing presents an exciting challenge to the industry. Progress is dependent not only upon innovative research in medicinal chemistries and biologics, but also on the related support industries that supply partnership in engineering, devices and consumables to ADC manufacturers.  The progress and future promise for ADC therapies is validation of the synergy between these fields.

Future Trends

Manufacturing
A robust pipeline exists for ADCs, with over 45 molecules in development. Seattle Genetics and its partners have applied their ADC technology to more than 200 antibodies and have licensed their technology to several companies. [22]

Key areas for future development include new cytotoxic agents as well as new linkers that are adequately stable and at the same time can be cleaved efficiently to deliver the cytotoxic drug. [23] There also continue to be developments in the areas of manufacturing and scale-up for this technology, given the cytotoxicity of the drug and the challenges associated with manufacturing the antibody variants. [14] Only a handful of CMOs own these capabilities, particularly conjugation services. The CMOs who do offer conjugation and linkage services are heavily reliant on single-use technology and continue to push the industry for advances in that area. [24] As ADC technology advances and continues to gain footing and funding in the biotechnology and financial sectors, the CMO marketplace will likely expand.

Supply Chain
The supply chain of an ADC is highly complex, combining development and manufacturing capabilities in pharma and biopharma with a demanding analytical skill and capability set. But as most companies/CMOs are specialized in a precisely defined niche, such as high-potency manufacturing or linker technology, the ADC-developing company must manage a highly complex supply chain with often up to seven or more partners. Therefore, ADC-developing companies are demanding a more integrated supply chain solution with one partner covering the majority – or all – of the supply chain. An increasing number of collaborations, strategic alliances and acquisitions during recent years confirms this trend. Further, CMOs who already offer part of the ADC supply chain are entering the market with investments in new conjugation facilities. [25]

Chemistry
To further advance the efficacy of ADCs, new drug platforms and new or improved linker technologies are in development. Today’s drug conjugation strategies yield heterogeneous conjugates, resulting in a relatively narrow therapeutic window. The cell-killing effects of ADCs are highly dependent on the drug-to-antibody ratio (DAR); therefore, controlling the DAR is a major focus of the development of new linker technologies. [26][27]

ADCs with improved homogeneity will help to balance cytotoxic effects against the side effects of a therapy. [28] Finally, companies such as MacroGenics are also focusing efforts on more specifically targeting tumors via dual targeting of tumor antigens through bispecific ADCs.


February 20, 2017 | Corresponding Author: Julien Zhao | DOI: 10.14229/jadc.2017.21.03.001

Received: February 20, 2017 | Accepted for Publication: March 7, 2017 | Published online March 21, 2017

Last Editorial Review: March 20, 2017

Featured Image: Photo of scientists researching in laboratory. Courtesy: © Fotolia. Used with permission.

Photo 1.0, 2.0, 3.0 and 4.0 Courtesy of Merck KGaA, Darmstadt, Germany © 2017 Merck KGaA, Darmstadt, Germany. All rights reserved.

Creative Commons License
This work is published by InPress Media Group, LLC (Antibody-Drug Conjugates: Manufacturing Challenges and Trends) and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at adcreview.com/about-us/permission.


Copyright © 2017 InPress Media Group. All rights reserved. Republication or redistribution of InPress Media Group content, including by framing or similar means, is expressly prohibited without the prior written consent of InPress Media Group. InPress Media Group shall not be liable for any errors or delays in the content, or for any actions taken in reliance thereon. ADC Review / Journal of Antibody-drug Conjugates is a registered trademark of InPress Media Group around the world.

The post Antibody-Drug Conjugates: Manufacturing Challenges and Trends appeared first on ADC Review.

Trial of High Efficiency TFF Capsule Prototype for ADC Purification


Antibody-drug conjugates (ADCs) are an emerging class of highly targeted cancer therapies in which a monoclonal antibody is chemically conjugated to a cytotoxic drug (payload). These complex biochemical moieties are composed of three essential components: the monoclonal antibody, the payload, and the linker, which holds the moiety together.

Photo 1.0. Brooke Czapkowski (corresponding author) holding the TFF capsule 0.11 m2 prototype.

Upon targeted recognition of the specific cancer cell receptors, the antibody becomes internalized by the cell, which makes it an effective vehicle for therapy. Once the ADC has been internalized, the cytotoxic drug is released, enabling it to kill the cancerous cell. Common mechanisms of action for these drugs include microtubule inhibition and DNA damage.

Due to the toxic nature of ADC compounds, non-toxic linker payloads (“mimics”) have been designed in order to study and effectively model ADCs. ADC mimics are comparable in the basic structure to the actual cytotoxic ADC, and can be conjugated, purified, and filtered just as normal, cytotoxic ADCs would.

These conjugation reactions require organic solvents and excess equivalents of linker payload (either cytotoxic or mimic). Solvent and excess free drug or mimic are much smaller molecules than the conjugate and can be removed rapidly from the drug substance by diafiltration with tangential flow filtration (TFF).

 The TFF step presents a safety concern to operators due to the high toxicity of the payload and the open cassette format of traditional TFF devices. TFF cassettes only seal when installed in a compression holder, thus leaving a risk of operator exposure to process fluid after filter removal for storage or disposal. A new TFF device format in development comprises a self-enclosed, pre-sterilized capsule that does not require a holder, thus dramatically improving the ease of use and safety of the TFF operation. The new device also provides high efficiency comparable to TFF cassettes, allowing for use of similar pumps and membrane areas. This simplifies conversion between the two formats for process development and production.

A proprietary, purified monoclonal antibody (~150 kD) was chemically conjugated to a proprietary linker-payload mimic molecule (~1 kD) to make an ADC mimic, and this mimic was used to compare the prototype TFF capsule to state-of-the-art benchmark cassettes. Dimethyl sulfoxide (DMSO) clearance, ADC yield, aggregate level, and flux were analyzed in order to assess the comparability of the two TFF devices. Safety and efficiency of the two devices were also evaluated.

Materials and Methods
Four devices were tested over a two-day period: two Pellicon® 3 0.11 m2 Ultracel® 30 kD nominal molecular weight cut-off cassettes (MilliporeSigma, Bedford, MA) and two TFF capsule 0.11 m2 prototypes also with 30 kD Ultracel® membrane (MilliporeSigma; See Photo 1 | Figure 1). One capsule and one cassette were run the first day on parallel TFF systems and the second capsule and cassette were run in a similar fashion on the second day. The TFF process flow diagram can be found in Figure 1.

Comparability Data
Key device and system characteristics are shown in Table 1. All values are determined to be within the acceptable range except the hold-up for the cassette system on Day 1, which appears to be in error since tubing was retained for Day 2, where system values are much lower. System hold-up volume was determined by the change in the fully retained protein concentration upon addition of the known stock solution.

It is likely there was an error in pulling the diluted concentration sample from the system or in the UV measurement; therefore, the cassette hold-up value for Day 1 was omitted from the average (Table 1).
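The hold-up determination described above is a simple mass balance: a known volume of concentrated, fully retained protein stock is added to the system, and the measured dilution reveals the unseen internal volume. A minimal sketch of that calculation (the numbers in the example are hypothetical, not values from this study):

```python
def holdup_volume(v_stock_ml: float, c_stock: float, c_measured: float) -> float:
    """Estimate system hold-up volume from dilution of a fully retained protein.

    Mass balance: c_stock * v_stock = c_measured * (v_holdup + v_stock)
    => v_holdup = v_stock * (c_stock - c_measured) / c_measured
    Concentrations may be in any consistent unit (e.g., g/L by UV A280).
    """
    if c_measured <= 0 or c_measured > c_stock:
        raise ValueError("measured concentration must be positive and below stock")
    return v_stock_ml * (c_stock - c_measured) / c_measured

# Hypothetical example: 50 mL of 10 g/L stock diluting to a measured 8 g/L
# implies 12.5 mL of system hold-up volume.
print(round(holdup_volume(50.0, 10.0, 8.0), 1))  # 12.5
```

An anomalously high result from this calculation, as seen on Day 1, typically points to an error in the concentration sample or the UV measurement rather than in the tubing itself.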

DMSO was efficiently cleared from the ADC mimic feed solution by both devices (Figure 3). Ten diafiltration wash volumes reduced DMSO concentration by a factor of 10,000 (4 log reduction) with a sieving coefficient (S) of 1, which is ideal. DMSO concentration was further reduced to a total factor of 100,000 (5 log reduction) after 17-20 diafiltration volumes.
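For a freely permeating solute, constant-volume diafiltration theory predicts C/C0 = exp(−S·N), where N is the number of diafiltration volumes and S the sieving coefficient. The sketch below is the textbook model, not data from this trial; at S = 1 it predicts roughly 4.3 logs after 10 diavolumes and about 11.5 diavolumes for 5 logs, so the 17–20 diavolumes observed here suggests the effective sieving coefficient fell somewhat below 1 late in the wash:

```python
import math

def clearance_log_reduction(n_diavolumes: float, sieving: float = 1.0) -> float:
    """Log10 reduction of a freely permeating solute in constant-volume
    diafiltration: C/C0 = exp(-S * N)  =>  LRV = S * N / ln(10)."""
    return sieving * n_diavolumes / math.log(10)

def diavolumes_for_lrv(target_lrv: float, sieving: float = 1.0) -> float:
    """Diafiltration volumes needed to reach a target log reduction."""
    return target_lrv * math.log(10) / sieving

print(round(clearance_log_reduction(10), 1))  # 4.3 logs after 10 DVs at S = 1
print(round(diavolumes_for_lrv(5), 1))        # 11.5 DVs for a 5-log reduction
```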

The filtrate flow rate was a function of transmembrane pressure (TMP) below 15 psi for the capsule and below 20 psi for the cassette, due to the capsule’s higher permeability (Figure 4). Overall, the fluxes were stable throughout the diafiltration step for both devices. The TMP was set to 15 psi for the capsule; the cassette started at 10 psi, due to a manual error, and was then corrected to the target 15 psi after 3 diafiltration volumes (Figure 5A). Both formats could attain a maximum flux of about 90 LMH (L/m2·hr) (Figure 5B).

Yields were comparable and acceptably high for the two device formats (Table 2), and the aggregate formation rate for the ADC mimic was also similar (Figure 6).

A summary of the performance comparability of the two devices based on the target parameters studied is shown in Table 3. The capsule prototype demonstrated comparable performance to the cassette when run at the same feed flow rate and TMP.

Applicability Data
The capsule was user-friendly, as summarized in Table 4. The capsule was lighter, more mobile, and easier to set up than the cassette, since it did not require a holder (5 kg) and had fewer connections to establish the flow paths. Approximately 45 minutes was saved per alpha trial run with the capsule because the sanitization step was not needed. The potential time savings on the GMP manufacturing floor can be many times higher. The capsule fit well within the typical TFF system flow path used for cassettes, and was also safer for the operator, as it did not need to be opened during disassembly to remove the filter: the capsule is self-contained.

Conclusion
An ADC mimic was used to compare the prototype TFF capsule to state-of-the-art benchmark cassettes. Both devices demonstrated similar process performance and clearance of the organic solvent (DMSO). Based on the assessment of the two device formats, the capsule could be a safer, more time-efficient device for ultrafiltration/diafiltration processes of antibody drug conjugates. Follow up studies will be conducted to evaluate scalability of different sizes and types of devices using actual cytotoxic ADC material.


Brooke Czapkowski (1), Jonathan Steen (2), Eric Bortell (1), Vimal Patel (1), Ye Joon Seo (1), Jim Jiang (1), Julius Lagliva (1), Deanna Di Grandi (1), and Mikhail Kozlov (2)

(1) Pfizer, Pearl River, NY; (2) MilliporeSigma, Bedford, MA


March 27, 2017 | Corresponding Authors: Brooke Czapkowski and Jonathan Steen | DOI: 10.14229/jadc.2017.11.04.001

Received: March 27, 2017 | Accepted for Publication: April 10, 2017 | Published online April 12, 2017 |

Last Editorial Review: April 11, 2017

Featured Image: Chemical Glass. Courtesy: © Fotolia. Used with permission.


The post Trial of High Efficiency TFF Capsule Prototype for ADC Purification appeared first on ADC Review.

Environmental Risk Assessment and New Drug Development


1.0 Abstract
In our globalized world, human pharmaceutical residues and traces of other (chemical) down-the-drain contaminants have become an environmental concern. Following the detection of (pharmaceutical) drug residues in drinking and surface waters, regulatory agencies around the world, including the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA), have developed detailed guidance on how pharmaceutical products should be assessed for possible adverse environmental effects.

Hence, an Environmental Risk Assessment or ERA is required as part of the clinical development, regulatory submission and marketing authorization of pharmaceuticals. This is mandatory both for drugs intended for the treatment of human disease and for those intended for veterinary use.

Using fate, exposure and effects data, an environmental risk assessment or ERA evaluates the potential risk of (new) medicinal compounds and the environmental impact they cause.

Despite the available guidance from regulatory agencies, regulatory policy is complex, and a number of aspects related to ERA remain unclear because they are not yet well defined. Furthermore, the specific requirements are not always straightforward. Moreover, while some types of chemicals are exempt (e.g., vitamins, electrolytes, peptides, proteins), such exemption may be overruled when a specific mode of action (MOA) involves endocrine disruption and modulation.

In this white paper, which focuses on human pharmaceuticals rather than veterinary pharmaceuticals, the author reviews topics ranging from regulations and environmental chemistry to exposure analysis and environmental toxicology. He also addresses key aspects of an ERA.


2.0 Introduction
The effective functioning of a modern, healthy society increasingly demands the development of novel therapeutic agents for the treatment of human and veterinary disease, as well as the new and emerging technologies that form the foundation for this advancement. A proper understanding of the environmental health and safety risks that may be introduced into the environment as part of developing these new medicines is an important part of this process.

To understand these risks, Environmental Risk Assessments (ERAs) are designed to systematically organize, evaluate and understand relevant scientific information. The purpose of such an assessment is to ascertain if, and with what likelihood, individuals are directly or indirectly exposed to (novel) medicinal compounds, (bio) pharmaceutical products or active pharmaceutical ingredients in our immediate environment, as well as the consequences of such exposure. The information can then be used to assess whether the use of these agents may result in unintended health-related impairment or harm as the result of such exposure, as well as the impact these agents may have on a globalized world. [1]

3.0 Exposure
Exposure may occur if humans come into contact with (novel) medicinal compounds, (bio) pharmaceutical products or active pharmaceutical ingredients. And while therapeutic agents may be intended to cause some measure of harm – for example, chemotherapeutic agents in the treatment of patients with various forms of cancer designed to “kill” malignant cells – unintended environmental exposure may, in turn, cause unintended serious adverse events. In many cases, such exposure may be limited to trace levels of the active pharmaceutical ingredient.

Over the past 30 years, the impact of such exposure, as well as its implications, have become clearer. Because early analytical equipment was not very sensitive, traces of (novel) therapeutic and medicinal compounds, (bio) pharmaceuticals and active pharmaceutical ingredients were not easily detected in the environment until the 1990s. The result was that the impact of these agents in the environment was generally considered nonexistent and unimportant.

However, since the late 1940s, scientists have been aware of the potential that a variety of chemicals are able to mimic endogenous estrogens and androgens. [2][3][4]

The first accounts indicating that hormones were not completely eliminated from municipal sewage, wastewater and surface water were not published until 1965, by scientists at Harvard University, [5][6] and it was not until 1970 that scientists, concerned with wastewater treatment, probed to what extent steroids are biodegradable, because hormones are physiologically active in very small amounts. [7]

However, the first report specifically addressing the discharge of medicinal compounds, pharmaceutical agents or active pharmaceutical ingredients into the environment was published in 1977 by scientists from the University of Kansas. [8]

Despite these and many other early findings, the subject of medicinal compounds such as steroids and other pharmaceutical residues in wastewater did not gain significant attention until the 1990s, when the occurrence of hermaphroditic fish was linked to natural and synthetic steroid hormones in wastewater. [9]

In numerous studies and reports, researchers hypothesized and confirmed that effluent discharge in the aquatic environment, such as municipal sewage, wastewater systems as well as surface waters, contained either a substance or (multiple) substances, including natural and synthetic hormones, that are estrogenic to fish, affecting their reproductive systems. [10]

In time, scientists confirmed that these adverse effects, and implications of endocrine disruption and modulation, were caused by residues of estrogenic human pharmaceuticals. [1]

After discovering hermaphroditic fish in and near water-treatment facilities, scientists identifying the estrogenic compounds that were most likely associated with this occurrence confirmed that substances such as ethynylestradiol, originating from pharmaceutical use, generated a similar effect in caged fish exposed to levels as low as 1 to 10 ng L−1 and that positive responses may even arise at 0.1 to 0.5 ng L−1. [9]

Although it was now recognized that the therapeutic agent or active pharmaceutical ingredient itself was biologically active, experts generally believed that there was only a limited environmental impact during manufacturing; and because these therapeutic agents were only manufactured in relatively small amounts, they were not concerned about the potential environmental risk of pharmaceutical residues and trace contaminants. [1]

4.0 Pharmacotherapy
Today, with pharmacotherapy a common part of our daily life, many concerned citizens realize that pharmaceutical residues and trace contaminants may represent an increased environmental risk with potential consequence for human and animal health. [1]

And although the concentrations of these residues rarely exceed the level of parts per billion (ppb), limiting acute toxicity, the emergence of these residues and traces in the environment fundamentally changed the way we look at the (potential) risk of these active pharmaceutical ingredients in the ecosystem. [1]

But regulators have also come to understand that environmental risk assessment developed for non-medicinal chemical containment cannot necessarily be applied to (novel) medicinal compounds, (bio) pharmaceutical products or active pharmaceutical ingredients. They understand that protecting the environment, while at the same time improving human and animal health, requires a better understanding of how to protect the environment (the ecosystem) as well as the active pharmaceutical ingredient in its own regulated environment.

5.0 Value for society
The issue of medicinal compounds, (bio) pharmaceutical agents and active pharmaceutical ingredients in our environment is complex. This complexity is, in part, derived from the medicinal value of these compounds and the general acceptance that patient use – and therefore the excretion of active pharmaceutical ingredients into the environment and, as a result, the potential of harmful effects to the ecosystem and human health – rather than other methods of release, is the primary reason why we find traces of these agents in our environment. [11]

There is no doubt that modern medicines developed by research-based pharmaceutical companies have brought tremendous value. For example, the development of antibiotics generated enormous gains in public health through the prevention and treatment of bacterial infections. In the 20th century, the use of antibiotics aided the unprecedented doubling of the human life span. [12][13]

Before the development of insulin in the late 1920s and early 1930s, people diagnosed with diabetes (type 1) were not expected to survive. In 1922, children with diabetes rarely lived a year after diagnosis. Five percent of adults died within two years, and less than 20% lived more than 10 years. But since insulin became available, the drug has become a daily routine for people with diabetes, creating a real survival benefit and making the difference between life and death. [14]

Pharmaceutical agents have also drastically impacted social life. The introduction of the pill in the early 1960s, for example, affected women’s health, fertility trends, laws and policies, religion, interpersonal relations, family roles, women’s careers, gender relations and premarital sexual practices, offering a host of contraceptive and non-contraceptive health benefits. [15]

It can be said that the emergence of the women’s rights movement of the late 1960s and 1970s is directly related to the availability of the pill and the control over fertility it enabled: It allowed women to make personal choices about life, family and work. [15]

The development of novel targeted anticancer agents, including antibody-drug conjugates or ADCs, have resulted in a new way of treating cancer and hematological malignancies with fewer adverse events, longer survival and better quality of life (QoL).
In the end, the economic impact of pharmaceutical agents, some hailed as true miracles, has been remarkable, contributing to our ability to cure and manage (human) disease and allowing people to live longer, healthier lives.
At the same time, the (clinical) use of (novel) medicinal or (bio) pharmaceutical agents and their underlying active ingredients can also harbor a number of risks for the environment.

6.0 Understanding environmental risk
In the development of novel therapeutic agents, intensive pre-clinical investigations yield a vast amount of pharmacological and toxicological data. During the discovery and (early) development of therapeutic agents, researchers pay close attention to target specificity and pathways to understand how an innovative drug compound may have beneficial efficacy in the treatment of human or veterinary diseases. Because adverse events are undesirable, drug developers often focus on therapeutics with a well-understood mechanism of action (MOA) and low toxicity (often measured in ng/L). [1]

As a result, only a small number of pharmaceuticals will be classified as highly and acutely toxic, requiring new approaches to identify pharmaceutical agents in robust environmental hazard and risk assessments. [16]

7.0 Pharmaceutical risk assessment
While non-medicinal and chemical entities produced in significant commercial quantities require an environmental risk assessment based on a minimum set of hazard data to assess and manage risks to humans and the environment, such an approach does not necessarily apply to (novel) therapeutic agents. One reason is that the health and wellbeing of humans should never be assessed and managed on the basis of risk alone. Regulators generally require drug developers or sponsors to undertake a comprehensive assessment of the potential risks and benefits of a proposed therapeutic agent, which may demonstrate significant risk to the patient. However, these risks are largely offset by the medicinal benefits of such agents.

Regulators around the world require a systematic and transparent assessment of the (potential) of environmental risk in addition to a (novel) medicinal agent’s quality, safety and efficacy, and relevance as part of regulatory decision-making. [17]

8.0 Environmental risk and regulatory requirements in the United States
The legal mandate of protecting the environment in the United States consists of the National Environmental Policy Act of 1969 (NEPA), which requires all federal agencies to assess the environmental impact of their actions and the impact on the environment, and the Federal Food, Drug and Cosmetic Act (FFDCA) of 1938 (amended in 1976).

This legal framework further determines that the regulation of pharmaceuticals in the environment is the responsibility of the United States Environmental Protection Agency or EPA and the United States Food and Drug Administration (FDA), which is required to consider the environmental impact of approving novel therapeutic agents and biologics applications as an integral part of the regulatory process.

The FDA has required environmental risk assessments for (novel) medicinal compounds, (bio) pharmaceutical agents and active pharmaceutical ingredients for veterinary use (since 1980) as well as the treatment of human diseases (since 1998).

As such, the FDA regulations in 21 CFR part 25 identify when a Pharmaceutical Environmental Risk Assessment (PERA) is required as part of a New Drug Application (NDA), an abbreviated application, or an Investigational New Drug (IND) application. [18]

The same regulations (21 CFR 25.30 or 25.31) identify categorical exclusions for a number of products and product categories – including vitamins, electrolytes, peptides, proteins, etc. – that do not require the preparation of an environmental risk assessment or ERA because, as a class, these agents, individually or cumulatively, do not significantly affect the quality of the (human) environment.

In addition, and in contrast to the categorical exclusion, these regulations also identify cases when such an exemption may be overruled as the result of a specific mode of action (MOA) involving endocrine disruption and modulation. [18]

9.0 Required ERA
Under the applicable regulations, NDAs, abbreviated applications and supplements to such applications do not qualify for a categorical exclusion if the FDA’s approval of the application results in an increased use of the active moiety or active pharmaceutical ingredient (as a result of higher dose levels, longer duration of use, or a different indication than was previously approved), or if the medicinal agent or drug is a new molecular entity and the estimated concentration of the active therapeutic agent at the point of entry into the aquatic environment is expected to be 1 part per billion (ppb) or greater.

Furthermore, a categorical exclusion is not applicable when approval of an application results in a significantly altered concentration or distribution of a (novel) therapeutic agent, the active pharmaceutical ingredient, its metabolites or degradation products in the environment.

Regulations also refer to so-called extraordinary circumstances (stated in 21 CFR 25.21 and 40 CFR 1508.4) where a categorical exclusion does not exist. This may be the case when a specific product significantly affects the quality of the (human) environment and the available data establishes that there is a potential for serious harm. Such environmental harm may go beyond toxicity and may include lasting effects on ecological community dynamics. Hence, it includes adverse effects on species included in the United States Endangered Species Act (ESA) as well as other federal laws and international treaties to which the United States is a party. In these cases, considered extraordinary circumstances, an environmental risk assessment is required unless there are specific exemptions relating to the active pharmaceutical ingredient.

10.0 Naturally Occurring Substances
Based on the current regulations, a drug or biologic may be considered to be a “naturally occurring” substance if it comes from a natural source or is the result of a biological process. This applies even if such a product is chemically synthesized. The regulators consider the form in which an active ingredient or active pharmaceutical agent exists in the environment to determine if a medicinal compound or biologic is a naturally occurring substance. Biological and (bio) pharmaceutical compounds are also evaluated in this way.

According to the Guidance for Industry, a protein or DNA containing naturally occurring amino acids or nucleosides with a sequence different from that of a naturally occurring substance will, after consideration of metabolism, generally qualify as a naturally occurring substance. The same principle applies to synthetic peptides and oligonucleotides as well as living and dead cells and organisms. [18]

11.0 Preparing an Environmental Risk Assessment
If an environmental risk assessment is required, the FDA requires drug developers and/or sponsors to focus on characterizing the fate and effects of the active pharmaceutical ingredient in the environment as laid out in the Guidance for Industry, Environmental Assessment of Human Drugs and Biologics Applications (1998). [18]

This is generally the case if the estimated concentration of the active pharmaceutical ingredient being considered reaches, at the point of entry into the aquatic environment, a concentration ≥1 ppb; significantly alters the concentration or distribution of a naturally occurring substance, its metabolites or degradation products in the environment; or, based on available data, it can be expected that an increase in the level of exposure may potentially lead to serious harm to the environment. [18]

To guarantee that satisfactory information is available, the 1998 Guidance for Industry lays out a tiered approach for toxicity testing to be included in an environmental risk assessment. [Figure 1] [18]

Furthermore, if potential adverse environmental impacts are identified, the environmental risk assessment should, in accordance with 21 CFR 25.40(a), include a discussion of reasonable alternatives designed to offer less environmental risk or mitigating actions that lower the environmental risk.

Figure 1: Tiered Approach to Fate and Effect Testing (USA) [18]

12.0 A Tiered Approach
The fate and effects testing is based on a tiered approach:

12.1 Tier 1
This step does not require further ecotoxicity testing to be performed if the EC50 or LC50 divided by the maximum expected environmental concentration (MEEC) is ≥1,000, unless sublethal effects are observed at the MEEC. If sublethal effects are observed, chronic testing as indicated in tier 3 is required. [18]

12.2 Tier 2
In this step, acute ecotoxicity testing is required to be performed on a base set of aquatic and/or terrestrial organisms. In this phase, acute ecotoxicity testing includes a fish acute toxicity test, an aquatic invertebrate acute toxicity test and an algal species bioassay.

Similar to tier 1, tier 2 does not require further testing if the EC50 or LC50 for the most sensitive organism included in the base set, divided by the maximum expected environmental concentration (MEEC), is, in this tier, ≥100, unless sublethal effects are observed at the MEEC. However, as in the case of tier 1, if sublethal effects are observed, chronic testing as indicated in tier 3 is required. [18]

12.3 Tier 3
This tier requires chronic toxicity testing if the active pharmaceutical ingredient has the potential to bioaccumulate or bioconcentrate, or if such testing is required based on tier 1 or tier 2 test results. [18]

13.0 Bioaccumulation and Bioconcentration
Bioaccumulation and bioconcentration are complex and dynamic processes depending on the availability, persistence and physical/chemical properties of an active pharmaceutical ingredient in the environment. [18]

Bioaccumulation and bioconcentration refer to an increase in the concentration of the active pharmaceutical ingredient in a biological organism over time, compared with the concentration in the environment. In general, compounds accumulate in living organisms any time they are taken up and stored faster than they are metabolized or excreted. The understanding of this dynamic process is of key importance in protecting human beings and other organisms from the adverse effects of exposure to a (novel) medicinal compound, (bio) pharmaceutical agent or active pharmaceutical ingredient, and it is a critical consideration in the regulatory process. [21]

According to the definition in the Guidance for Industry, active pharmaceutical ingredients are generally not very lipophilic and are, in comparison to industrial chemicals, produced in relatively low quantities. Furthermore, the majority of active pharmaceutical ingredients generally metabolize to Slow Reacting Substances or SRSs that are more polar, less toxic and less pharmaceutically active than the original parent compound. This suggests a low potential for bioaccumulation or bioconcentration. [18]

Following a proper understanding of this process, tier 3 chronic toxicity testing is required if an active pharmaceutical ingredient has the potential to bioaccumulate or bioconcentrate. A primary indicator is the octanol/water partition coefficient (Kow). If, for example, the logarithm of the octanol/water partition coefficient (Kow) is high, the active pharmaceutical ingredient tends to be lipophilic. If the coefficient is ≥3.5 under relevant environmental conditions, such as a pH of 7, chronic toxicity testing is required.

Tier 3 does not require further testing if the EC50 or LC50 divided by the maximum expected environmental concentration (MEEC) is ≥10, unless sublethal effects are observed at the MEEC.
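The tier logic described above can be condensed into a small decision helper. The sketch below is an illustrative rendering of the safety-ratio screens (1,000 / 100 / 10 for tiers 1–3) and the log Kow ≥3.5 trigger as summarized in this section, not an implementation of the FDA guidance itself; the function names and example numbers are hypothetical:

```python
def needs_further_testing(ec50: float, meec: float, tier: int,
                          sublethal_at_meec: bool = False) -> bool:
    """Apply the tiered safety-ratio screen summarized in this section.

    Ratio thresholds (EC50 or LC50 divided by the MEEC): tier 1 -> 1,000,
    tier 2 -> 100, tier 3 -> 10. Sublethal effects observed at the MEEC
    always trigger chronic (tier 3) testing.
    """
    thresholds = {1: 1000.0, 2: 100.0, 3: 10.0}
    if sublethal_at_meec:
        return True
    return (ec50 / meec) < thresholds[tier]

def tier3_triggered_by_kow(log_kow: float) -> bool:
    """Chronic testing is indicated when log Kow >= 3.5 under relevant
    environmental conditions (e.g., pH 7), flagging bioaccumulation potential."""
    return log_kow >= 3.5

# Hypothetical example: an EC50 of 5 mg/L against an MEEC of 0.002 mg/L gives
# a ratio of 2,500, which passes the tier 1 screen (no further testing).
print(needs_further_testing(5.0, 0.002, tier=1))  # False
```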

In accordance with the Guidance for Industry, a drug developer or sponsor should include a summary discussion of the environmental fate and effect of the active pharmaceutical ingredient in an environmental risk assessment. The environmental risk assessment should also include a discussion of the affected aquatic, terrestrial or atmospheric environments. [18]

14.0 Special Consideration: Environmental Impact Statement
Following the filing of an environmental risk assessment for gene therapies, vectored vaccines and related recombinant viral or microbial products, the FDA will evaluate the information and, based on the submitted data, determine whether the proposed (novel) medicinal compound, (bio) pharmaceutical agent or active pharmaceutical ingredient may significantly affect the environment and if an Environmental Impact Statement (EIS) is required. According to 21 CFR 25.52, if an EIS is required, it will be available at the time the product is approved. Furthermore, if required, an EIS includes, according to 40 CFR 1502.1, a fair discussion of the environmental impact as well as information to help decision-makers and the public find reasonable alternatives that help in avoiding or minimizing adverse impacts or enhance environmental quality. [19]

However, if the FDA determines that an EIS is not required, a Finding of No Significant Impact (FONSI) will, according to 21 CFR 25.41(a), explain why this is not required. This statement will include either the environmental risk assessment or a summary as well as reference to underlying documents supporting the decision. [19]

15.0 European requirements
In Europe, environmental risk assessments were, in accordance with EU Directive 92/18/EEC and the corresponding note for guidance issued by the European Medicines Agency (EMA), first required for (novel) medicinal agents for veterinary use in 1998. The requirement for an environmental risk assessment for (novel) medicinal agents, (bio) pharmaceuticals and active pharmaceutical ingredients for the treatment of human disease was first described in 2001 in Directive 2001/83/EC.

Subsequent to an initial guidance document published in January 2005, the European Medicines Agency’s Committee for Medicinal Products for Human Use (CHMP) issued its final guideline on the assessment of environmental risk of medicinal products for human use in 2006. [20]

After the discovery of pharmaceutical residues and trace contaminants in the environment, regulators in the European Union require that an application for marketing authorization of a (novel) medicinal or (bio) pharmaceutical agent is accompanied by an environmental risk assessment.

This requirement is spelled out in the revised European Framework Directive relating to medicinal products for human use. It applies for new registrations as well as repeat registrations for the same medicinal agent if the approval of such an extension or application leads to the risk of increased environmental exposure.

In Europe, the objective of the environmental risk assessment is to evaluate, in a step-wise, phased procedure, and as part of the Centralized Procedure by the European Medicines Agency’s Committee for Medicinal Products for Human Use (CHMP), the potential environmental risk of (novel) medicinal compounds, (bio) pharmaceutical agents and/or active pharmaceutical ingredients. Such an assessment will be executed on a case-by-case basis.

16.0 Phase I
In this process, Phase I estimates the exposure of the environment to the drug substance and is only focused on the active pharmaceutical ingredient or drug substance/active moiety, irrespective of the intended route of administration, pharmaceutical form, metabolism and excretion.
This phase excludes amino acids, proteins, peptides, carbohydrates, lipids, electrolytes, vaccines and herbal medicines, because regulators believe that these biologically derived products are unlikely to present a significant risk to the environment. [21]
The exemption for these biologically derived biopharmaceuticals is generally interpreted as an exemption for all biopharmaceutical agents manufactured via live organisms and that have an active ingredient that is biological in nature. [21]

Yet, not all biologically derived biopharmaceuticals are (easily) biodegradable, and scientists have detected modified natural products, including plasmids, in the environment. Furthermore, some protein structures, including prions, are very environmentally stable and resistant to degradation, allowing them to persist in the environment. [22] Hence, this approach requires future scientific justification.
In Phase I, following the directions included in the European Chemicals Bureau (2003) Technical Guidance Document, an active pharmaceutical ingredient or drug substance/active moiety with a logKow >4.5 requires further screening for persistence, bioaccumulation and toxicity, or a PBT assessment.

For example, based on the OSPAR Convention and REACH Technical Guidance, highly lipophilic agents and endocrine disruptors are referred to PBT assessments.
Phase I also includes the calculation of the Predicted Environmental Concentration or PEC of active pharmaceutical ingredients, which, in this phase, is restricted to the aquatic environment, and a so-called “action limit” requiring additional screening.
The “action limit” threshold for the PEC in surface water (PECsurface water), for example, is calculated by using the daily dose of an active pharmaceutical ingredient, the default values for wastewater production per capita, and the estimated sale and/or distribution of the active pharmaceutical ingredient if there is evidence of metabolism and no biodegradation or retention following sewage treatment is observed.
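The "action limit" calculation described above can be sketched in a few lines. This is an illustrative sketch only, using the default values from the CHMP guideline (market-penetration factor of 1%, 200 L of wastewater per inhabitant per day, a dilution factor of 10, and an action limit of 0.01 µg/L for PEC in surface water); the dose used in the example is hypothetical.

```python
# Sketch of the Phase I action-limit screen for PEC in surface water.
# Default values follow the CHMP guideline; the dose is hypothetical.

def pec_surface_water(dose_mg_per_day, fpen=0.01,
                      wastewater_l_per_capita=200.0, dilution=10.0):
    """Predicted Environmental Concentration in surface water (mg/L).

    dose_mg_per_day: maximum daily dose of the active ingredient per inhabitant
    fpen:            market-penetration factor (default 1%)
    """
    return (dose_mg_per_day * fpen) / (wastewater_l_per_capita * dilution)

ACTION_LIMIT_UG_PER_L = 0.01  # below this, Phase II is normally not triggered

pec_mg_per_l = pec_surface_water(dose_mg_per_day=100.0)  # hypothetical dose
pec_ug_per_l = pec_mg_per_l * 1000.0                     # mg/L -> ug/L
print(f"PECsw = {pec_ug_per_l:.3f} ug/L")
print("Phase II screening required"
      if pec_ug_per_l >= ACTION_LIMIT_UG_PER_L else "Below action limit")
```

If evidence of metabolism, biodegradation or retention in sewage treatment exists, the guideline allows these defaults to be refined, which is why the factors are exposed as parameters in the sketch.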

17.0 Phase II
Phase II, divided into two parts, tier A and tier B, assesses the fate and effects of novel medicinal compounds, (bio) pharmaceutical agents or active pharmaceutical ingredients in the environment.

Following the assessment of the PEC/PNEC ratio based on relevant environmental fate and effects data (Phase II Tier A), further testing may be needed to refine PEC and PNEC values in Phase II Tier B; a PEC/PNEC ratio below 1 generally indicates that no further testing is required. This process helps regulators to evaluate potential adverse effects independently of the benefit of the (novel) medicinal compound, (bio) pharmaceutical agent or active pharmaceutical ingredient, or the direct or indirect impact on the environment.
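The Tier A decision logic around the PEC/PNEC ratio can be expressed as a minimal sketch; the numeric values in the example are hypothetical.

```python
# Minimal sketch of the Phase II Tier A screen: a PEC/PNEC risk quotient
# of 1 or more flags the substance for Tier B refinement.

def needs_tier_b(pec_ug_per_l, pnec_ug_per_l):
    """Return True when the PEC/PNEC risk quotient is >= 1."""
    return (pec_ug_per_l / pnec_ug_per_l) >= 1.0

print(needs_tier_b(pec_ug_per_l=0.5, pnec_ug_per_l=2.0))  # ratio 0.25
print(needs_tier_b(pec_ug_per_l=0.5, pnec_ug_per_l=0.1))  # ratio 5.0
```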


Stage in regulatory evaluation | Stage in risk assessment | Objective | Method | Test/data requirement
Phase I | Pre-screening | Estimation of exposure | Action limit | Consumption data, logKow
Phase II Tier A | Screening | Initial prediction of risk | Risk assessment | Base set aquatic toxicology and fate
Phase II Tier B | Extended | Substance- and compartment-specific refinement and risk assessment | Risk assessment | Extended data set on emission, fate and effects

Table 1: The Phased Approach in Environmental Risk Assessment in Europe


18.0 Outcome of fate and effects analysis
In all cases, the medicinal benefit for patients takes precedence over environmental risks. This means that even when a novel medicinal compound, pharmaceutical agent or active pharmaceutical ingredient poses an unacceptable (residual) environmental risk after third-tier considerations, prohibition of the new active pharmaceutical ingredient is not considered.

If European regulators determine that the possibility of environmental risk cannot be excluded, mitigating, precautionary and safety measures may require the development of specific labeling designed to address the potential risk, as well as adding adequate information in the Summary of Product Characteristics (SPC), Package Leaflet (PL) for patient use, product storage and disposal. The information on the label, SPC and PL should also include information on how to minimize the discharge of the product into the environment and how to deal with disposal of unused product, such as in the case of shelf-life expiration.

In extreme cases, a recommendation may be included for restricted in-hospital or in-surgery administration under supervision only, a recommendation for environmental analytical monitoring, or a requirement for ecological field studies. [20] [23]

19.0 Combined effects
Often overlooked by regulators is the fact that the regulatory frameworks such as the European REACH Regulation, the Water Framework Directive (WFD) and the Marine Strategy Framework Directive (MSFD) mainly focus on toxicity assessment of individual chemicals or active pharmaceutical ingredients.

This poses a problem for the proper execution of environmental risk assessments and regulation because the effect of contaminant mixtures with multiple chemical agents and active pharmaceutical ingredients, regardless of their source, is a matter of growing, and recognized, scientific concern. [24]

To solve this problem, scientists are working on experimental, modeling and predictive environmental risk assessment approaches using combined effect data, the involvement of biomarkers to characterize Mode of Action, and toxicity pathways and efforts to identify relevant risk scenarios related to combined effects of pharmaceutical residues, trace contaminants as well as non-medicinal (industrial) chemicals. [24]
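One widely used screening model for such combined effects is concentration addition, in which the risk quotients of the individual constituents are summed into a "hazard index" for the mixture. The sketch below is illustrative only, with hypothetical PEC and PNEC values.

```python
# Concentration-addition screening for a mixture: the hazard index is
# the sum of the individual PEC/PNEC risk quotients. Values hypothetical.

def mixture_hazard_index(pec_pnec_pairs):
    """Sum of PEC/PNEC risk quotients across mixture constituents."""
    return sum(pec / pnec for pec, pnec in pec_pnec_pairs)

# Three hypothetical co-occurring residues (PEC, PNEC in ug/L):
mixture = [(0.02, 0.5), (0.10, 1.0), (0.01, 0.05)]
hi = mixture_hazard_index(mixture)
print(f"Hazard index: {hi:.2f}")
```

Note that each individual quotient here is below 1, yet the summed index can still approach or exceed the level of concern, which is exactly the additive-exposure problem single-substance frameworks miss.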

20.0 International harmonization
Created in the 1990s, the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) was set up as an agreement between the European Union, the United States and Japan to harmonize different regional and national requirements for registering pharmaceutical agents in order to reduce the need to duplicate testing during the research and development phase of (novel) medicinal compounds, (bio) pharmaceutical agents and active pharmaceutical ingredients. However, to date, and partly as a result of the underlying differences in regulations and directives, environmental risk assessments have not been included in the harmonization procedures. [25]

In contrast, the International Cooperation on Harmonisation of Technical Requirements for Registration of Veterinary Medicinal Products (VICH), like the ICH a trilateral agreement, set up in 1996 between the European Union, the United States and Japan, does include the assessment of ecotoxicity and the evaluation of the environmental impact of veterinary medicinal products.

The VICH guideline, intended to provide a common basis for an Environmental Impact Assessment or EIA, offers guidance for the use of a single set of environmental fate and toxicity data and is designed to guide scientists to secure the type of information needed to protect the environment. The guideline, published in 2004 and recommended for implementation in 2005, was developed as a scientifically objective tool to help scientists and regulators extract the maximum amount of information from studies to achieve an understanding of the potential (risk) of specific Veterinary Medicinal Products to the environment. [26]

21.0 Impact of Environmental Risk Assessment
Although an environmental risk assessment is part of the regulatory approval and marketing authorization process in both the United States and Europe, the actual impact can be different.

In Europe, an adverse environmental risk assessment for (novel) medical compounds, (bio) pharmaceutical agents or active pharmaceutical ingredients for human use does not impact or influence the marketing approval application. EU Directive 2004/27/EC/Paragraph 18 stipulates that the environmental impact should be assessed and, on a case-by-case basis, specific arrangements to limit it should be envisaged. In any event, the impact should not lead to refusal of a marketing authorization.

However, a parallel directive pertaining to veterinary medicine, as laid out in EU Directive 2009/9/EC, stipulates that, in the case of veterinary medicine, an environmental impact assessment should be conducted to assess the potential harmful effects and the kind of harm the use of such a product may cause to the environment, as well as to identify any precautionary measures that may be necessary to reduce such risk.

Furthermore, the directive requires that, in the case of live vaccine strains which may be zoonotic, the risk to humans also needs to be assessed. In the case of veterinary medicine, an environmental impact assessment is part of the overall risk-benefit assessment, and, in the case of a negative result, may potentially lead to a refusal to approve the medicinal compound, (bio) pharmaceutical agent or active pharmaceutical ingredient.

In the United States, the FDA has eliminated environmental assessment requirements for certain types of veterinary drugs when they are not expected to significantly affect the environment. However, a negative assessment, based on unacceptable risk to “food” or “non-food” animals, can result in a refusal of a New Animal Drug Application (NADA) or a Supplemental New Animal Drug Application (SNADA). [26]

22.0 Conclusion
The central question in the development of (novel) medicinal compounds, (bio) pharmaceutical products or active pharmaceutical ingredients for the treatment of human and veterinary disease is whether a novel agent will have an effect on the environment.

Regulators around the world, including in the United States and Europe, follow different assessment methodologies to ascertain these risks. However, all regulators use fate, exposure and effects data to help them understand if a (novel) medicinal compound, (bio) pharmaceutical agent or active pharmaceutical ingredient harbors a potential environmental risk, causing potential harmful effects on the ecosystem, and how this impacts human and veterinary health.

In all cases, environmental risk assessments are carried out based on scientifically sound premises, relying on established, accepted and universally known facts.

Overall, environmental risk assessments are useful analytical tools, providing critical information contributing to public health, as well as key instruments in guiding environmental policy decision-making.

As such, they play a key role in building a better, healthier world.


May 3, 2017 | Corresponding Authors: Duane Huggett, Ph.D | DOI: 10.14229/jadc.2017.29.04.003

Received: February 24, 2017 | Accepted for Publication: April 28, 2017 | Published online May 3, 2017 |

Last Editorial Review: April 28, 2017



This work is published by InPress Media Group, LLC (Environmental Risk Assessment and New Drug Development) and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at adcreview.com/about-us/permission.


Copyright © 2017 InPress Media Group. All rights reserved. Republication or redistribution of InPress Media Group content, including by framing or similar means, is expressly prohibited without the prior written consent of InPress Media Group. InPress Media Group shall not be liable for any errors or delays in the content, or for any actions taken in reliance thereon. ADC Review / Journal of Antibody-drug Conjugates is a registered trademark of InPress Media Group around the world.



Challenges in Environmental Testing of Multi-component Substances


1.0 Abstract
An Environmental Risk Assessment (ERA) needs to clearly identify hazard and exposure to evaluate risk. Good analytical chemistry methods are essential for measuring environmental concentrations in field samples, as well as in hazard-based testing to determine the effects of a specific chemical. These methods are also needed to determine the physical-chemical properties used in models to predict environmental concentrations of chemicals.

Developing methods can be challenging enough for one test chemical; multi-component or multi-constituent substances, however, present an even greater challenge. According to the European Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) definition, multi-constituent substances are defined by their quantitative composition, in which more than one main constituent is present in a concentration ≥10% (w/w) and <80% (w/w). The definition also states that components or constituents with a concentration <10% (w/w) should be identified as impurities, and it requires that impurities ≥1% (w/w) be specified by at least one classifier, such as name, CAS Registry number, etc. In addition, the U.S. Toxic Substances Control Act (TSCA) regulates the introduction of new and already existing chemicals.

The successful analysis of multi-component or multi-constituent substances requires in-depth knowledge of the chemical process used to manufacture them. As a result, analytical techniques used for multi-constituent substance characterization must have the capacity to distinguish between the various components present and generate direct evidence of their chemical structure and concentration. To be successful, a number of questions need to be answered before the start of the analytical process: What toxicity and physical/chemical data are available? What models are available? What are the matrices of interest (water, soil, sediment, air, animal or plant tissue)? What range of concentrations do we need to achieve in the matrices of interest? What are the limiting factors in achieving good recoveries in those matrices? Stability and homogeneity of the test substance are key in many of the matrices (e.g., diet, water, sediment).

In order to meet the analytical requirements of regulatory agencies, scientists depend on more sensitive and rapid analytical techniques than the traditional technologies of gas chromatography (GC), gas chromatography mass spectrometry (GC/MS) and high-performance liquid chromatography (HPLC). These more sensitive analytical technologies may include liquid chromatography coupled to mass spectrometry (LC-MS) and tandem mass spectrometry (LC-MS/MS), allowing quantitative analysis according to regulatory requirements as well as aiding in compound screening, identification and confirmation.

In this article, the authors discuss some of the key aspects involved in the analytical methods involving multi-component or multi-constituent substances, as well as a number of applicable regulatory requirements.


2.0 Introduction
The number of multi-component or multi-constituent substances found in the environment is rapidly increasing. At the same time, our understanding of both regulated and unregulated chemicals, often with highly diverse structures and broad biological activities, and the (medicinal) (bio)pharmaceuticals and personal-care products involved, is growing.

The increased occurrence as well as the added complexity of multi-component substances presents a major challenge for environmental analytical chemists. This challenge can be met with extensive analytical methods and comprehensive characterization using a variety of techniques and methods to confirm the occurrence of a particular chemical in the environment.

Definition: What are multi-component or multi-constituent substances
Multi-component or multi-constituent substances are, according to the European Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) regulatory framework, defined by their quantitative composition in which more than one main constituent is present in a concentration ≥10% and <80% (w/w). The definition also states that components or constituents with a concentration <10% (w/w) should be identified as impurities, and it requires that impurities ≥1% (w/w) be specified by at least one classifier, such as name, CAS Registry number, etc. [1]
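The REACH composition thresholds quoted above can be encoded as a short classification rule. This sketch is illustrative; the constituent names and mass fractions in the example are hypothetical, and the ≥80% case corresponds to REACH's separate mono-constituent category.

```python
# Sketch of the REACH composition rules: constituents at >= 10% and
# < 80% (w/w) count as main constituents; anything below 10% is an
# impurity, and impurities at >= 1% must be identified by a classifier.

def classify_constituent(fraction_ww):
    """Classify a single constituent by its mass fraction (0.0-1.0)."""
    if 0.10 <= fraction_ww < 0.80:
        return "main constituent"
    if fraction_ww < 0.10:
        return ("impurity (must be identified)"
                if fraction_ww >= 0.01 else "impurity")
    return "mono-constituent range (>= 80%)"

# Hypothetical composition of a reaction mass:
for name, frac in [("A", 0.55), ("B", 0.30), ("C", 0.05), ("D", 0.004)]:
    print(name, classify_constituent(frac))
```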

In addition to REACH, the manufacturers of Fertilizers and Related Materials (FARM) have joined forces for their REACH compliance activities by launching the FARM REACH consortium in December 2008. They define multi-constituent substances as preparations. In their view, multi-constituent substances, irrespective of the manufacturing method used for the final chemical product, always contain a combination of a few (simple) ionic components. [1]

In the United States, the U.S. Toxic Substances Control Act of 1976 (TSCA) regulates the types of chemicals that can be used in manufacturing, as well as the introduction of new chemicals. The act, which specifically mandates the United States Environmental Protection Agency (U.S. EPA) to protect the public from “unreasonable risk of injury to health or the environment” by regulating the manufacture and sale of chemicals, is essential given the large number of chemicals and multi-constituent substances subject to control, each with myriad multiple and synergistic (toxic) effects and multiple pathways of exposure.


3.0 Significant environmental risk
A key question, asked by analytical scientists and regulators alike, is whether the combined low concentrations of chemicals, (medicinal) (bio)pharmaceuticals and personal-care products found in multi-component substances in the aquatic environment have a significant, recognizable effect on ecologic function. A second, equally important, question asks whether multi-component substances pose a long-term risk to human and veterinary health. [2]

These questions are complicated by the fact that while the concentrations of these chemicals, (medicinal) (bio)pharmaceuticals and personal-care products may be relatively low, measured on the sub-part-per-billion or sub-nanomolar level, a number of these substances may share a specific mode of action (MOA), which, in turn, could lead to possible significant environmental effects through additive exposure. The concern is that these substances, which may enter our environment by a variety of routes, may escape detection if they are present at concentrations below detection limits, while still having hazardous long-term cumulative effects. This may especially be the case with residues of specific pharmaceuticals or personal-care products found in the aquatic environment. [2]

The reason for this concern is that a number of these pharmaceuticals are specifically designed to modulate endocrine and immune systems and cellular signal transduction, giving them the potential to function as endocrine disruptors. Other agents found in the aquatic environment, including antibiotics for the treatment of human and veterinary disease, may contribute to the development of resistance in human pathogens. [3]

While anti-cancer drugs and antibiotics are generally administered to “cure” disease, other pharmaceuticals are largely designed to manage or control symptoms of chronic disease, leading to long-term use. Analytical scientists and regulators are concerned about the (in many cases) still-unknown effects of unintentional and long-term exposure to these agents in a healthy population. [4]

Complementing this concern is the fact that select multi-component substances “surviving” various steps of metabolism and other degradative or sequestering actions may create an environmental risk. In some cases, these products may even be more bioactive than the original compound, adding to the potential risk. [2]


4.0 Regulated and unregulated chemicals
Analytical scientists have, for many decades, centered their research on regulated chemicals included in various legislation in the United States and Europe. With the availability of more sensitive analytical methods, such as liquid chromatography coupled to mass spectrometry (LC-MS) and tandem mass spectrometry (LC-MS/MS), which allow the detection of chemicals used in everyday life, such as surfactants and surfactant residues, pharmaceuticals and personal-care products, and gasoline additives, scientists are now able to analyze unregulated contaminants that either went undetected before or were not considered a risk, as well as to determine their biological effects. A number of the chemicals in high-risk multi-component substances, including detergent metabolites, steroids, and prescription and non-prescription drugs, are among the compounds most frequently found in high concentrations in the aquatic environment. [4]


5.0 Human health and regulations
In addition, due to the potential implications of these compounds on human and veterinary health, environmental analysis, as part of the regulatory approval process related to the manufacturing of novel (bio) pharmaceuticals, usually includes rigorous quality assurance and quality control (QA/QC) metrics designed to confirm the reliability of the analytical data.

Overall, the regulatory expectations to better understand product impurities and degradants in biopharmaceutical products continue to increase, making environmental risk assessment considerably more complex, especially when it involves multi-component substances.


6.0 Why so complex?
Analyzing multi-component or multi-constituent substances is complex because of the difference between perceived facts and reality. For example, in analyzing multi-component or multi-constituent substances, it may be assumed that the materials or substances looked for are of a well-known toxicology and physiology, with well-known chemical and biological properties and known concentrations. In reality, specific data may be missing, chemical properties may be unknown or poorly understood, and finding the range of the concentration may be challenging. At the same time, questions may arise about the kind of mixture to be tested (what are we testing?), the formulation (do co-solvents alter the formulation?), insoluble materials (how low do we need to go?), polymers (differences in chain lengths) and testing near solubility (stock volumes).

Hence, successful analysis of multi-component substances requires in-depth knowledge of the chemical process used to manufacture multi-component or multi-constituent substances.

Characterizing multi-constituent substances is also complex because the analytical methods used must have the capacity to distinguish among all the substances present and must be able to generate direct evidence of their chemical structure and concentration.

According to REACH, this includes impurities down to a level of ≤1% (w/w) of the substance impacting the overall hazard classification. Because REACH defines multi-constituent substances as the product of a chemical reaction, comprehensive knowledge of the actual chemical process used to prepare the multi-constituent substance is essential when selecting analytical techniques that are meaningful for each substance, as well as when deciphering the test results. [1]

In the development of these meaningful analytical procedures, different analytical technologies designed to provide the means to acquire structural and compositional data, can be applied. This choice largely depends on the chemical nature and the complexity of the multi-constituent substance, but it should include either simultaneous separation-analysis techniques or an analysis of the reaction mass without physical separation of each of the constituents. [1]


7.0 Analytical methods
In their Guidance for Identification and Naming of Substances under REACH, the European Chemicals Agency (ECHA) offers limited details concerning spectroscopic and chromatographic methods that can be used to characterize multi-constituent substances. Among the methods they list as techniques that can be used to confirm the composition of multi-constituent substances are mass spectrometry (MS); gas chromatography (GC); gas chromatography–mass spectrometry (GC-MS); high-performance liquid chromatography (HPLC); ultra-performance liquid chromatography (UPLC), a relatively new technique offering new possibilities in liquid chromatography; and liquid chromatography–tandem mass spectrometry (LC-MS/MS). [1]

Overall, the Guidance explains that methods offering simultaneous separation and analysis have the potential to significantly contribute to the characterization process. [1]


8.0 Gas chromatography
Gas chromatography is a technology often used in detecting volatile organic compounds (VOCs). This chromatography technique, extensively used in the analysis of pharmaceutical products, allows the analysis of impurities in pharmaceuticals as well as the identification of residual solvents listed by the International Conference on Harmonisation (ICH), making accurate quantitative determination of complex mixtures possible. This includes traces of multi-component or multi-constituent substances down to parts per trillion.


9.0 Gas chromatography–mass spectrometry
Gas chromatography is generally a reliable and effective method for separating compounds into their various components. However, it may not always be used for reliable identification of specific substances in the analysis of multi-component or multi-constituent substances.

In these cases, gas chromatography–mass spectrometry (GC-MS) may be used for conclusive proof of identity. GC-MS first vaporizes a mixture and separates its chemical components by gas chromatography. The effluent of the gas chromatograph is then fed into a mass spectrometer, where the components are ionized and detected. This, in turn, leads to the identification of the components through the mass of the analyte.

GC-MS is commonly used for confirmation testing of substances in pharmaceutical drug testing, quality control and environmental assessment.


10.0 High-performance liquid chromatography
One of the primary analytical tools to assess environmental occurrence of multi-component or multi-constituent substances is high-performance liquid chromatography or HPLC. High-performance liquid chromatography is an advanced form of liquid chromatography used to separate, identify and quantify the components in complex mixtures in both chemical and biological environments. This analytical technique is commonly used to determine the molecular species present in a specific multi-component sample.

The underlying principles guiding HPLC are based on the Van Deemter equation, an empirical formula describing the relationship between linear velocity (flow rate) and plate height (height equivalent of theoretical plate [HETP] or column efficiency). According to the Van Deemter equation, a decrease in particle size not only allows significant gain in efficiency, but this efficiency does not diminish at increased flow rates or linear velocities. [5]
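For reference, the Van Deemter equation relates plate height to linear velocity as:

```latex
H = A + \frac{B}{u} + C\,u
```

where $H$ is the plate height (HETP), $u$ the linear velocity, $A$ the eddy-diffusion term, $B$ the longitudinal-diffusion term and $C$ the resistance-to-mass-transfer term. Because the $A$ and $C$ terms shrink with particle diameter, smaller particles lower the curve and flatten its rise at high flow rates, which is the efficiency behavior described above.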

Although HPLC helps analytical chemists answer many questions, it lacks, as a result of the proprietary nature of column packing, long-term reproducibility.


11.0 Ultra-performance liquid chromatography
Ultra-performance liquid chromatography (UPLC; Waters) is based on the same principles as HPLC and used in similar applications, focusing on small-molecule analysis. The use of smaller columns that are tightly packed with smaller particles (down to sub-2-µm) increases speed, resolution and sensitivity. As a result, UPLC significantly improves chromatographic separations. But this may also present unique challenges. UPLC systems and columns require higher levels of care and attention compared with traditional HPLC. To gain these benefits requires adding fittings and pumps designed to support high system back-pressure. [5][6][7][10]

Depending on the actual system used, system back-pressures used in UPLC may reach values of 100 MPa. In contrast, in HPLC the maximum is between 35 and 45 MPa. This means, for example, that an analysis performed using UPLC can withstand back-pressures of about 90 MPa, which is not possible when using conventional HPLC. [10]

Although UPLC is generally used for small-molecule analysis, the technique is also used for the analysis of proteins. While protein chromatography has generally suffered from problems related to carryover, peak splitting, peak broadening and poor peak shape, using smaller particles at a higher pressure and flow rate – as in the case of UPLC – has remedied many of these problems. Furthermore, when combined with mass spectrometry, UPLC becomes a powerful tool used to separate, identify, characterize and quantify (intact) proteins. [7]


12.0 Liquid chromatography–mass spectrometry
One of the first applications of liquid chromatography–mass spectrometry, also known as LC-MS, was the detection of specific pharmaceuticals and their metabolites within biological fluids in pharmacokinetic studies. While this is still a major application today, LC-MS has also become a powerful and invaluable technique adapted for the detection and trace analysis of polar compounds in aqueous samples from the environment. [8]

Used for many applications, LC-MS combines the physical separation capabilities of high-performance liquid chromatography (HPLC) with the mass analysis capabilities of mass spectrometry, offering a range of advantages due to its high sensitivity and mass selectivity.

Liquid chromatography coupled to mass spectrometry (LC-MS) and tandem mass spectrometry (LC-MS/MS) has become a technique of choice for the analysis of high-risk chemicals and (bio) pharmaceuticals found in multi-component substances. This includes endocrine disruptors, causative agents of bacterial resistance (as a result of antibiotic use) and alkylphenolic surfactants.

LC-MS has many specific benefits. Because it enables the identification and quantification of substances without derivatization, this technique is faster, more convenient and more sensitive compared with other methods. In some cases, LC-MS has been used to analyze multi-component or multi-constituent substances that could not be determined before by using different technologies and methods. Furthermore, because of superior sensitivity, selectivity, flexibility and a wide range of metabolite detection, LC-MS has been a promising platform in the analysis of low-abundant metabolites. Overall, the use of LC-MS results in lower detection limits.

The advancement of LC-MS has largely been made possible by changed interface designs. Over the last decade, they have become more sophisticated and efficient. Among the most widely used interfaces for LC-MS analysis of steroids, drugs and surfactants in the aquatic environment are electrospray ionization or ESI (for analysis of polar compounds) and atmospheric pressure chemical ionization or APCI (for the analysis of medium- and low-polarity substances). [9]

However, although LC–MS profiling has become more advanced, it may not always be sensitive enough to detect and characterize metabolites at trace levels. Hence, liquid chromatography–tandem mass spectrometry, or LC–MS/MS, has been developed.


13.0 Liquid chromatography–tandem mass spectrometry
Liquid chromatography–tandem mass spectrometry (LC-MS/MS) is widely used for highly selective and sensitive bioanalysis of small molecules and offers higher sensitivity and selectivity in the trace analysis of multi-component or multi-constituent substances. In contrast to conventional photodiode array detection (PDA), analytes do not have to be fully resolved to be identified and quantitated. Furthermore, chemical derivatization is not required, as it can be, for example, with gas chromatography–mass spectrometry (GC-MS).

LC-MS/MS has evolved into a vital, strategic and powerful qualitative and quantitative analytical technique with a wide range of clinical applications, including therapeutic drug monitoring (TDM), toxicology, microbiology, the emerging field of proteomics and other applications.

One of the benefits of LC-MS/MS is that it allows analytical chemists to multiplex, helping them to identify and quantify multiple analytes simultaneously and reducing cost per test. Cost savings are further realized by increased throughput via simplified or minimal sample preparation for a variety of applications, including dilute-and-shoot or protein crash, compared with more time-consuming and expensive sample preparation methods such as solid-phase extraction (SPE) or derivatization.
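
The cost argument above is simple arithmetic: the fixed cost of one injection is spread across all analytes identified and quantified in that run. A minimal Python sketch, using entirely hypothetical cost figures:

```python
# Illustrative only: how multiplexing spreads a fixed LC-MS/MS run cost
# across analytes. All cost figures are hypothetical.

def cost_per_analyte(run_cost, prep_cost, n_analytes):
    """Total cost of one injection divided by the number of analytes
    identified and quantified in that run."""
    if n_analytes < 1:
        raise ValueError("need at least one analyte")
    return (run_cost + prep_cost) / n_analytes

# A single-analyte assay vs. a 10-analyte multiplexed panel:
single = cost_per_analyte(run_cost=50.0, prep_cost=10.0, n_analytes=1)
panel = cost_per_analyte(run_cost=50.0, prep_cost=10.0, n_analytes=10)
print(single, panel)  # 60.0 6.0
```

The same per-run fixed cost yields a tenfold lower cost per reported result in the multiplexed case, which is the economic driver described above.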


14.0 A different strategy
A different analytical strategy is the analysis of the reaction mass without physical separation of each of the constituent substances. This characterization approach may be adopted when unequivocal confirmation of the composition of the multi-component or multi-constituent substance by physical separation using standard technologies is difficult or impossible.

This alternative strategy uses light in the infrared (IR) region of the spectrum, which interacts with the bonds in molecules and resonates at characteristic frequencies. Although the absorption signatures of some molecules of interest can be quite weak, if the relevant signals can be detected and identified, this non-separation approach allows the use of so-called “spectral fingerprints.” [1] However, when comparing these spectral fingerprints with reference data, special care must be taken because the reference data may have been obtained using different instruments, conditions or methods, limiting reproducibility. [1][11]
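
As a minimal illustration of fingerprint matching, the sketch below compares a measured absorbance spectrum against a reference on the same wavenumber grid using cosine similarity. The spectra and the 0.95 acceptance threshold are illustrative assumptions, not values from any standard; in practice the instrument-to-instrument differences noted above also have to be corrected for (baseline, normalization, resolution).

```python
# A minimal sketch of "spectral fingerprint" matching via cosine
# similarity. Both vectors must share the same wavenumber grid.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

reference = [0.10, 0.80, 0.30, 0.05, 0.60]   # library fingerprint (hypothetical)
measured  = [0.12, 0.78, 0.28, 0.07, 0.55]   # sample fingerprint (hypothetical)

score = cosine_similarity(reference, measured)
print(f"match score = {score:.3f}")
if score >= 0.95:  # illustrative acceptance threshold
    print("consistent with reference fingerprint")
```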


15.0 Matrix effect
In the analysis of multi-component or multi-constituent substances, the matrix effect, historically associated with bioanalytical methods, refers to the specific effects caused by all components in a sample other than the target compound to be quantified. These effects may be caused by either endogenous or exogenous compounds in the sample.

In the analysis of very complex matrices, even when using selected reaction monitoring or SRM detection, false negative results (as a result of matrix ionization suppression effects) and false positive results (due to insufficient selectivity) can occur. [12]

The incidence of matrix effects in LC-MS/MS methods has led to an increased understanding of the factors contributing to the occurrence of these effects and how to handle them. Improvement in the instrumentation and analytical methodology, including modified ionization, ionization switching and extraction modification, has improved the reproducibility and robustness of LC-MS/MS. [4][13]
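
One widely used way to quantify a matrix effect is to compare the analyte's peak area in a post-extraction spiked matrix with its area in neat solvent; values below 100% indicate ionization suppression, values above 100% indicate enhancement. A sketch in Python, with hypothetical peak areas:

```python
# Matrix effect expressed as the widely used peak-area ratio
# (post-extraction spiked matrix vs. neat solvent standard).

def matrix_effect_percent(area_in_matrix, area_in_solvent):
    """Return the matrix effect in percent; 100% means no effect."""
    if area_in_solvent <= 0:
        raise ValueError("solvent standard area must be positive")
    return 100.0 * area_in_matrix / area_in_solvent

# Hypothetical peak areas:
me = matrix_effect_percent(area_in_matrix=7.2e5, area_in_solvent=1.0e6)
print(f"matrix effect = {me:.0f}%")  # 72% -> noticeable ionization suppression
```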


16.0 Increased selectivity
Due to its inherent selectivity, sensitivity, flexibility and multi-component capability, the application of LC-MS/MS has rapidly expanded in recent years. Using LC-MS/MS, analytical chemists can solve challenging clinical and biomedical research problems more thoroughly and efficiently than was previously possible. Novel technology also offers analytical chemists reproducibility, selectivity and sensitivity often unattainable with immunoassays.


17.0 Sample preparation
However, with all the advanced analytical methods and techniques available, there remains one crucial aspect to be considered: sample preparation. LC-MS and LC-MS/MS analysis of multi-component or multi-constituent substances requires sensitive and robust assays. Analysis often involves very complex samples, requiring expert preparation protocols designed to remove unwanted components as well as selectively extract components of interest. Hence, before selecting a specific testing method, analytical chemists need to answer a number of key questions, including:

  • How high do we need to test?
  • What are the limiting factors in achieving nominal concentrations?
  • Is it necessary for test concentrations to be between 80% and 120% of nominal?
  • What toxicity and physical-chemical data are available?
  • What are the test concentrations in aquatic testing? and
  • What models are available?

Some answers may be found in recommendations and (regulatory) guidelines. For example, the guideline recommendations for the selection of test concentrations for aquatic testing limit the concentration of pesticides to 100 mg/L and industrial chemicals to 1,000 mg/L. Likewise, in the development of aquatic tests, a number of important questions related to functional solubility need to be considered, including stability vs. degradation (hydrolysis and photolysis) and absorption vs. volatility. With multi-component substances in aquatic tests it may be necessary to run Water Accommodated Fraction (WAF) trials, followed by toxicity tests. One could then develop methods to identify the key analytical components in the WAF to determine which components are likely causing the toxicity. If there are no effects, then additional testing would not be warranted.
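
Two of the checks discussed above translate directly into code: the guideline limit concentrations quoted in the text and the 80-120%-of-nominal acceptance window. The sketch below is illustrative only and is no substitute for the applicable guideline:

```python
# Limit concentrations as quoted in the text (pesticides 100 mg/L,
# industrial chemicals 1,000 mg/L), plus the 80-120% nominal window.

LIMIT_MG_PER_L = {"pesticide": 100.0, "industrial_chemical": 1000.0}

def exceeds_limit(substance_class, nominal_mg_per_l):
    """True if the proposed test concentration exceeds the guideline limit."""
    return nominal_mg_per_l > LIMIT_MG_PER_L[substance_class]

def within_nominal_window(measured, nominal, low=0.80, high=1.20):
    """True if the measured concentration is within 80-120% of nominal."""
    return low * nominal <= measured <= high * nominal

print(exceeds_limit("pesticide", 150.0))        # True: above 100 mg/L
print(within_nominal_window(85.0, 100.0))       # True: 85% of nominal
```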


18.0 Conclusion
The use of advanced LC-MS and LC-MS/MS technologies for the environmental assessment of multi-component substances has allowed analytical chemists to determine a large number of compounds, especially polar compounds, that previously were either difficult or impossible to analyze. [3]

Moreover, the introduction of novel interfaces and triple quadrupole analyzers has improved the sensitivity and selectivity of detection. Consequently, the analysis of steroids, many pharmaceuticals and alkylphenolic surfactants in the environment is possible at the ng/L and ng/g level, and even at the pg/L and pg/g level. [8]

Although enhanced selectivity and sensitivity and rapid, generic gradients have made LC–MS the predominant technology for both quantitative and qualitative analyses, the most important value and application of current LC-MS techniques is the determination of known target compounds, because the capacity of these techniques for screening and identification of unknowns is relatively low. [15]

Despite the high selectivity of LC-MS–based methodologies, and in particular of LC-MS/MS, false negative findings can still occur due to the often high complexity of multi-component substances. To solve this problem, rigorous confirmation and identification criteria in terms of retention time, base peak and diagnostic ions, and relative abundances remain crucial. [8] [14]
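
Such confirmation criteria can be expressed as simple rule checks. In the sketch below, a candidate peak is accepted only if its retention time and diagnostic-ion abundance ratios fall within tolerance windows; the ±2.5% and ±20% tolerances and the ion names are illustrative assumptions, since the applicable method defines the real criteria:

```python
# Rule-based identity confirmation: retention time plus diagnostic-ion
# abundance ratios, each checked against a relative tolerance window.

def confirm_identity(rt, ref_rt, ion_ratios, ref_ion_ratios,
                     rt_tol=0.025, ratio_tol=0.20):
    """Return True only if retention time and every required
    diagnostic-ion ratio match the reference within tolerance."""
    if abs(rt - ref_rt) > rt_tol * ref_rt:
        return False
    for ion, ref_ratio in ref_ion_ratios.items():
        measured = ion_ratios.get(ion)
        if measured is None:
            return False  # a required diagnostic ion is missing
        if abs(measured - ref_ratio) > ratio_tol * ref_ratio:
            return False
    return True

# Hypothetical candidate vs. reference values:
ok = confirm_identity(
    rt=6.02, ref_rt=6.00,
    ion_ratios={"m/z 212": 0.45, "m/z 168": 0.21},
    ref_ion_ratios={"m/z 212": 0.47, "m/z 168": 0.20},
)
print(ok)  # True
```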

In routine analyses it is important to consider speed, sensitivity and resolution. However, the costs of the analysis and the associated column maintenance should also be considered. This, in turn, involves choosing the appropriate mobile phases, careful system washing, and application of adequate flow rates with regard to the column and system properties.

Finally, a proper approach to sample pretreatment remains an indispensable part of the analytical workflow. In recent decades, important progress has been made with regard to the preparation of samples, including compensation for matrix effects. This is important when considering the nature of matrix interference in LC-MS analysis. [6]

The development of new technologies, including fully automated LC-MS/MS assays, is expected to significantly impact the environmental assessment of multi-component substances. In turn, these technologies can help analytical scientists expand the knowledge about the presence, fate and persistence of known and newly identified multi-component substances and their degradation products found in the environment, allowing them to assess potential risks and develop, if necessary, remediation strategies and actions. [15]


February 26, 2017 | Corresponding Authors: Duane Huggett and Hank Krueger | DOI: 10.14229/jadc.2017.29.04.004

Received: April 27, 2017 | Accepted for Publication: April 27, 2017 | Published online May 3, 2017 |

Last Editorial Review: May 3, 2017

Featured Image: Close-up of an HPLC instrument pump (used for analytical chemistry work). Courtesy: © Fotolia. Used with permission.

Creative Commons License
This work is published by InPress Media Group, LLC (Challenges in Environmental Testing Multi-component Substances) and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at adcreview.com/about-us/permission.


Copyright © 2017 InPress Media Group. All rights reserved. Republication or redistribution of InPress Media Group content, including by framing or similar means, is expressly prohibited without the prior written consent of InPress Media Group. InPress Media Group shall not be liable for any errors or delays in the content, or for any actions taken in reliance thereon. ADC Review / Journal of Antibody-drug Conjugates are registered trademarks of InPress Media Group around the world.

The post Challenges in Environmental Testing of Multi-component Substances appeared first on ADC Review.

Downstream Processing Considerations for Antibody Variant Therapeutics


Abstract
Novel platforms such as antibody derivatives, peptide-based therapies, and gene and stem cell-based therapies are gaining a foothold in the market for several reasons, including the need for better pharmacokinetics (PK) and pharmacodynamics (PD), improved potency against disease targets, the ability to treat more than one aspect of a disease simultaneously, better and cheaper production processes, reduced side effects and the biosimilar cliff.

In this article, we will focus on three types of antibody derivatives, namely bispecific antibodies (BsAbs), antibody fragments (Fabs), and fusion proteins. We will include an overview of each and discuss the typical downstream processes, highlighting specific process challenges. Scale-up considerations will also be included.


1.0 Introduction
Monoclonal antibodies (MAbs) remain the dominant class of therapeutics in the biotechnology industry. However, the overall trend in biotherapeutics is a transition towards molecules with higher value and improved bioavailability. Traditional MAbs are altered to achieve this goal, and antibody variants such as antibody fragments (Fabs), bispecific monoclonal antibodies (BsAbs), and fusion proteins are being explored.

The term antibody fragment (Fab) is largely self-explanatory: it is the antigen-binding fragment of an antibody, containing the variable region [1]. A bispecific monoclonal antibody (BsAb) is composed of fragments of two different monoclonal antibodies that bind to two different types of antigens [2]. Fusion proteins are produced by gene fusion techniques that allow the production of recombinant proteins featuring the combined characteristics of the parental products [3].

Fabs can be expressed in both mammalian and bacterial expression systems. Bacterial expression is more common for Fabs: they can be expressed in E. coli as either inclusion bodies or soluble protein (soluble being more common). BsAbs and fusion proteins are more typically expressed in mammalian cell culture.

A typical downstream process for these antibody variants is shown in Figure 1.

Fig 1: Typical downstream process. Bioreactor/Fermentor > Harvest/Lysis (if bacterial)/Clarification > Capture > Polishing > Virus Clearance (if mammalian cell line) > UF/DF > Final Sterile.

In this article, we outline some of the unique requirements and challenges posed by these antibody variants in terms of recovery, purification, and scale-up/process transfer.


2.0 Part 1 – Recovery

Fig 2: Recovery: Bioreactor/Fermentor > Harvest/Lysis (if bacterial)/Clarification

The vast majority of current therapeutic antibodies, including BsAbs and fusion proteins, are still produced in mammalian cell lines in order to reduce the risk of immunogenicity due to non-human glycosylation patterns [1]. However, Fabs are more commonly produced in bacterial systems (E. coli) due to their smaller size and economic considerations. Bispecific antibodies without any glycosylation can also be successfully produced in bacteria.

For mammalian cell cultures (used for BsAbs and fusion proteins), normal flow depth filtration can be used for primary and secondary clarification steps for process volumes ≤ 2,000 L. Depth filtration has also been shown to assist with the removal of impurities such as host cell protein (HCP) and DNA and to improve downstream filter and column capacities. As titers and cell densities increase, the use of flocculation polymers and acid precipitation is becoming more common at harvest.

For bacterial expression systems, clarification is often one of the most challenging steps. For soluble proteins, microfiltration tangential flow filtration (MF-TFF) is often used instead of normal flow filtration; however, centrifugation followed by normal flow filtration (NFF) can also be evaluated. Typically, TFF yields higher product recovery and is more economical. There is also renewed interest in older technologies, such as using diatomaceous earth (DE) as a body feed for clarification. With secreted proteins, whole cells are separated from the fermentation broth and the particle size is thus larger; as a result, microfiltration (using TFF), centrifugation and normal flow filtration are all viable options. Endonuclease agents can also be used prior to clarification to digest DNA and RNA and to aid the efficiency of the clarification process [2].

For sterile filtration post-clarification, the capacity is influenced by the bioreactor/fermentor media components. Symmetric PVDF membranes are better suited for the sterile filtration of PEG- and hydrolysate-containing media types. Asymmetric PES membranes are also available and can be evaluated. The sterile filtration step should be optimized with respect to product recovery, capacity and operating flux.


3.0 Part 2 – Purification

Fig 3. Purification: Capture > Polishing > Virus Clearance (if mammalian cell line) > UF/DF > Final Sterile

Protein A, followed by cation exchange and anion exchange, can be successfully used for the purification of BsAbs, fusion proteins, or Fabs containing the Fc region. For molecules that do not contain an Fc region, capture is typically achieved using cation exchangers or mixed-mode resins in bind/elute mode, depending on the molecular characteristics of the target protein. A subsequent polishing step to improve resolution generally follows the capture step; this polishing step could be ion exchange (IEX) or hydrophobic interaction chromatography (HIC), depending on the previous step. Sometimes a third chromatographic step is required, depending on the separation results from the previous steps. In addition to resins, membrane adsorbers are also used for the polishing steps, usually in flow-through mode.

Virus filtration is not needed for bacterial expression systems due to the absence of adventitious viruses. For proteins expressed in mammalian cell cultures, demonstrating viral clearance is a regulatory requirement. Some fusion proteins or BsAbs can be similar in size to virus filter membrane pore sizes (~20 nm), leading to significant process challenges in terms of filter capacities and flux rates. For these types of molecules, an asymmetric PES parvovirus filter should be evaluated first, with and without prefiltration. If product recovery is an issue, regulatory agencies have accepted the use of non-parvovirus filters [2]. For smaller molecules, asymmetric PES parvovirus filters are recommended. Membrane-based prefilters can be used to normalize the feed (with respect to aggregate and impurity levels), increase capacity and reduce operating costs for this process step [2]. Special care should be taken when outlining the virus validation step, as this will dictate achievable process loadings [2].

For the ultrafiltration/diafiltration (UF/DF) step, vendors typically recommend using filters 3-5X tighter than the molecular weight of the target molecule. Therefore, for Fab molecules, which are typically small in comparison to MAbs (approximately 10 kD – 80 kD), tangential flow devices with 1-10 kD molecular weight cut-offs are commonly used. As a result, the permeate fluxes can be lower. For BsAbs, the typical UF/DF filters range from 30 to 50 kD MWCO.
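
The 3-5X rule of thumb above can be turned into a small helper that suggests a membrane cut-off for a given protein size. The list of available MWCO sizes below is an assumed set of typical catalog values, not any particular vendor's offering:

```python
# Suggest a UF/DF membrane MWCO that is at least `tightness`-fold smaller
# than the protein, per the vendor rule of thumb cited in the text.
# AVAILABLE_MWCO_KD is an assumed set of typical catalog sizes.

AVAILABLE_MWCO_KD = [1, 3, 5, 10, 30, 50, 100]

def suggest_mwco(protein_kd, tightness=3.0):
    """Largest available MWCO (kD) still `tightness`-fold below the
    protein's molecular weight."""
    target = protein_kd / tightness
    candidates = [m for m in AVAILABLE_MWCO_KD if m <= target]
    if not candidates:
        raise ValueError("protein too small for available membranes")
    return max(candidates)

print(suggest_mwco(50))    # Fab-sized, ~50 kD -> 10 kD membrane
print(suggest_mwco(150))   # BsAb-sized, ~150 kD -> 50 kD membrane
```

The suggestions agree with the ranges quoted above: 1-10 kD devices for Fabs and 30-50 kD devices for BsAbs.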

Additionally, some molecules can be PEGylated to improve bioavailability, which leads to higher viscosities as concentration increases. Also, because of the interest in subcutaneous applications, target molecules are being concentrated to higher concentrations. For these reasons, the influence of the type of screen (screens help create turbulence and promote mass transfer) in flat sheet devices is paramount and should be taken into account in small-scale optimization studies [1]. A final consideration for E. coli-expressed Fab molecules is downstream endotoxin removal. This is often achieved through anionic membrane adsorbers or charged membrane filters [1].

For the final sterile filtration, asymmetric PES membranes offer higher fluxes and capacities than symmetric membranes; however, both PES and PVDF membranes should be evaluated at this stage. Particular attention should be paid to product recovery in this step.


4.0 Part 3 – Scale Up and Process Transfer Considerations

Factors to consider when scaling up current antibody and antibody variant processes depend on the stage and goals of the project, including the molecule’s pre-clinical or clinical phase, speed to market, process economics, manufacturing and operational flexibility, expertise, facility infrastructure, and batch volumes. The pros and cons of these factors can be weighed to decide how to proceed with the scale-up logistics. In some cases, companies may lean towards utilizing single-use systems, stainless steel, or a hybrid for the manufacturing process. Additional considerations include building or using an existing facility, or outsourcing manufacturing to take advantage of the process development experience and infrastructure of contract manufacturing organizations (CMOs).

Single-use processes have inherent “out of the box, ready to use” benefits that ease implementation. With single-use processes, users benefit from lower upfront capital investment compared with a fixed facility with stainless steel systems; specific infrastructure requirements, such as steam and CIP/SIP, are likely not needed; and validation is minimal or even eliminated [4]. For a multi-product facility, and in cases where batch volumes may vary and be less than 2,000 L, the additional benefits of a single-use approach may come from quicker turnaround times from batch to batch, lower risk of cross-product contamination, flexible volume manufacturing, and overall economics and facility fit. All of these factors contribute to the delivery of a process that meets speed-to-market needs with improved economics and process flexibility.

A stainless steel facility can be considered for late-stage molecules, multiple and large campaigns, and batch volumes greater than ~2,000 L, where single-use systems may be a limitation. In this case, the facility and equipment implementation would involve a larger capital cost and initial validation investment; however, long-term utilization of these assets can bring a return on investment and pay for itself over time, depreciation of equipment is incorporated, and other factors of a long-term, multi-use facility may make the economics more feasible [4]. A manufacturing facility may also incorporate a hybrid of single-use and stainless steel infrastructure to accommodate all needs of the project’s stage and goals. CMOs are well equipped with both types of facilities and process expertise, which may be more appealing in cases where there is a facility throughput limitation and/or speed to market may require outsourcing.

In addition to specific facility needs, each unit operation has its own specific rules for scale-up. In some cases, linear scalability can be accomplished for technologies such as filtration; however, system scale-up is sometimes overlooked and can be a cause of deviations or unexpected process performance [5]. Fluid dynamics, hold-up volumes, frictional losses, hardware requirements and yield recoveries are factors to investigate thoroughly prior to scaling up. Ultimately, thorough process transfer studies must be completed to ensure the process meets specifications.

It is important to consider the hold-up volumes not only of the devices utilized in the process, but also of the system itself, and the impact these have on overall process recoveries. In some cases, systems are installed in cramped spaces, which may constrain device selection and require tubing/piping runs that include turns and height differentials. Along with hold-up volumes and system/piping design, all of which contribute to frictional losses, fluid dynamics (viscosity, temperature, flow rate) are another factor that helps determine the system component requirements for each unit operation [5]. In addition, when scaling up, researching the type of hardware to be used at large scale is obviously necessary; however, it is at times not given enough attention. For example, there are many devices on the market that are fully encapsulated at small scale but require holders at equivalent larger scales. Considerations for the large-scale hardware systems must be addressed within the different unit operations, including physical attributes, automation and footprint. Finally, and importantly, proper validation of the systems and process should be completed prior to scaling up or even manufacturing clinical material.
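
A first-order estimate shows why hold-up volume matters for recovery: product-containing fluid retained in the system is product lost unless a flush recovers it. The volumes in the sketch below are hypothetical:

```python
# First-order yield impact of system hold-up volume. All volumes are
# hypothetical; real recoveries also depend on adsorption, flush
# strategy and dilution limits.

def recovery_after_holdup(batch_volume_l, holdup_volume_l,
                          flush_recovery=0.0):
    """Fraction of product recovered when `holdup_volume_l` of
    product-containing fluid is retained by the system;
    `flush_recovery` is the fraction of that hold-up recovered
    by a buffer flush."""
    lost = holdup_volume_l * (1.0 - flush_recovery)
    return (batch_volume_l - lost) / batch_volume_l

# 2,000 L batch, 40 L system hold-up, 50% of the hold-up recovered by flush:
print(f"{recovery_after_holdup(2000, 40, 0.5):.1%}")  # 99.0%
```

The same 40 L hold-up that is negligible at 2,000 L would cost 20% of a 200 L batch, which is why hold-up must be re-examined at each scale.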

In addition to specific facility and system needs, all of the unit operations share common considerations for implementation and technology transfer. One consideration is for companies to further investigate each molecule’s process operating conditions via Design of Experiments (DoE), or even a deeper dive into a Quality by Design (QbD) approach. Another is to understand raw material and consumable lot-to-lot variability, as well as the process’s batch-to-batch variability. These factors provide a better understanding of each unit operation and of the performance of the process as a whole, which can contribute to robustness and a possibly wider operating window. In cases where these deeper approaches are not feasible, an upfront investment in rationally defined safety factors can be incorporated for all unit operations to minimize the risk of process deviations [6].


5.0 Conclusions
Antibody variants such as Fabs, BsAbs and fusion proteins are generating increased interest as the demand for targeted therapeutics with improved efficacy continues to grow. Compared with traditional MAb processes, these molecules present some development and manufacturing challenges. Each of the steps in the recovery and purification of these molecules must be optimized based on the process requirements and the molecule’s characteristics, ensuring robust, stable and scalable production processes.


January 9, 2015 | Claire Scanlan | Mireille Deschamps | Juan Castano | Ruta Waghmare, PhD | Corresponding Author: Ruta Waghmare, PhD | ruta.waghmare@emdmillipore.com | doi: 10.14229/jadc.2015.1.9.001

Received: December 10, 2014 | Accepted January 7, 2015 | Published online January 9, 2015

Creative Commons License
This work is published by InPress Media Group, LLC (Downstream Processing Considerations for Antibody Variant Therapeutics by Claire Scanlan, Mireille Deschamps, Juan Castano, Ruta Waghmare, PhD) and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at adcreview.com/about-us/permission.

Last Editorial Review: January 9, 2015




Editorial: Utilization of Breakthrough Therapy Designations for Market Access


In the pharmaceutical marketplace, time-to-market is crucial, with companies seeking viable strategies to help hasten the review process in the United States. Manufacturers must meet the requirements of the Food and Drug Administration Safety and Innovation Act or FDASIA for short, signed into law on July 9, 2012, which expanded the FDA’s authorities and strengthened the agency’s ability to safeguard and advance public health. The Advancing Breakthrough Therapies for Patients Act was incorporated into a Title of FDASIA in order to expedite the review process of novel therapies that are showing very promising results in early phase clinical trials. [1]


According to a peer-reviewed article published in Clinical Cancer Research, a breakthrough therapy designation (BTD) must meet specific criteria. First, the disease in question must be life-threatening and highly debilitating. Second, it must lack a standard-of-care treatment, or the existing standard of care must have failed to show efficacious clinical improvement. In addition, the product in clinical development must demonstrate substantial improvement compared to a therapy in the same class, or be superior to the current standard of care. Finally, the product must show promising or superior outcomes in early-stage trials. [2] If the product meets these criteria, it may be granted the BTD.


Fig 1.0 – Overview of Granted Breakthrough Therapy Requests.

1.0 Impact on Oncologic Market Access
Breakthrough Therapy Designations (BTDs) have captured the attention of many hematology and oncology drug manufacturers. According to EP Vantage, 141 BTDs have been requested, and 37 applications have been accepted by the FDA since 2012.[3] The Center for Drug Evaluation and Research or CDER reported that 42% of all granted BTDs fall into the disease categories of hematology and oncology, as of December 2013.[4] Clearly, pharmaceutical companies specializing in these disease areas are taking notice of this accelerated market access program.

While 37 applications have been accepted, only four products have had NDA approval. Of these four products, three are indicated for oncology use. Obinutuzumab (Gazyva®; Genentech) and ibrutinib (Imbruvica®; Pharmacyclics/Janssen Biotech) are both indicated for chronic lymphocytic leukemia (CLL); in addition, ibrutinib is indicated for mantle cell lymphoma. For the oncology market, BTDs represent the best option for expedited market access. On average, a product takes approximately 88 months to gain market approval; with a BTD, the approval process can take as little as 53 months. [5] For example, a BTD granted to a drug in Phase II gives the product the potential to reach market roughly three years faster than a product without a BTD. This kind of accelerated market access leads to a greater length of market exclusivity and gives manufacturers increased time for sales and marketing.


Fig 2.0 – For serious and life-threatening diseases, including cancer, the U.S. Food and Drug Administration (FDA) can grant specific designations to trial drugs that may help accelerate their time to approval. If the FDA grants accelerated approval, patients may receive a trial drug while ongoing Phase III studies confirm safety and efficacy. Source: FDA.

2.0 Pricing and Market Access Considerations for Oncology Products
The accelerated review of products with a BTD requires additional planning for pricing, reimbursement and market access (PR&MA). On most occasions, PR&MA planning is done during Phase III of product trials. With a BTD, a company must be ready to start this process during Phase II. According to Ram Subramanian, a pricing, strategy and marketing expert at Simon-Kucher & Partners, “Payers will find it more difficult to restrict access to a drug whose therapeutic value has been singled out by the FDA, and which may have already generated excitement among physicians and patients.” [5]

In other words, an oncology product gaining a BTD can potentially ease payer scrutiny. Nevertheless, the company must still construct a compelling value proposition to present to these stakeholders.

An article by Simon-Kucher & Partners published in OBR Oncology identified three value drivers that help develop a meaningful value proposition: Overall Survival (OS), Progression-Free Survival (PFS) and Safety Profile (SP) are the most compelling components of clinical results that payers want to see. [5] In the case of oncology products, OS and PFS data demonstrate efficacy to payers. Without this data, payers may be skeptical of the product and will require alternative and compelling evidence. A company may need to anticipate this challenge and find ways to compensate and demonstrate value despite the limited data generated by a product undergoing an accelerated review process.


3.0 Payer Uncertainty with Breakthrough Therapy
A certain amount of skepticism and uncertainty exists surrounding how BTD products undergoing clinical trials will be priced upon market approval. At a conference sponsored by Friends of Cancer Research in September 2013, an Aetna representative stated that payers are “nervous” about these products.[6] Michael Kolodziej, national medical director of Oncology Solutions, stated, “We recognize unmet need. We recognize the therapies that are being thrown at these diseases are not very effective. We would like much more effective treatment. We are not fools, however, and there are no new drugs to come to market that are cheaper than old drugs… So, we have to find the way to get the right drug to the right patient.”[6]

In addition, reimbursement surrounding products with BTD is uncertain. Primarily, payers are concerned about the prices that these new products will bring to the market.


4.0 Conclusion
For manufacturers, BTDs could bring potentially significant benefits to their oncology pipelines. With market approval, a product’s market access will increase significantly, which could create significant challenges when gaining support from various market access stakeholders. Payers in the oncology market, as well as in all other therapeutic areas, are generally concerned about the prices that these novel therapies will command.


March 31, 2015 | Corresponding Author Sophie Murdoch | doi: 10.14229/jadc.2015.3.31.001

Received: March 30, 2015  | Published online March 31, 2015 | This submitted editorial has not been peer reviewed.

Creative Commons License
This work is published by InPress Media Group, LLC (Editorial: Utilization of Breakthrough Therapy Designations for Market Access) and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at adcreview.com/about-us/permission.

Last Editorial Review: March 31, 2015




Challenges in Environmental Testing of Multi-component Substances


1.0 Abstract
An Environmental Risk Assessment (ERA) needs to clearly identify hazard and exposure to evaluate risk. Good analytical chemistry methods are essential for measuring environmental concentrations in field samples as well as in hazard-based testing to determine the effects of a specific chemical. These methods are also needed to determine the physical-chemical properties used in models to predict environmental concentrations of chemicals.

Developing methods can be challenging enough for one test chemical; multi-component or multi-constituent substances, however, present an even greater challenge. According to the European Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) definition, multi-constituent substances are defined by their quantitative composition, in which more than one main constituent is present in a concentration ≥10% (w/w) and <80% (w/w). The definition also states that components or constituents with a concentration <10% (w/w) should be identified as impurities, and it requires that impurities ≥1% (w/w) be specified by at least one classifier, such as name, CAS Registry Number, etc. In addition, the U.S. Toxic Substances Control Act or TSCA regulates the introduction of new and existing chemicals.

The successful analysis of multi-component or multi-constituent substances requires in-depth knowledge of the chemical process used to manufacture them. As a result, analytical techniques used for multi-constituent substance characterization must have the capacity to distinguish between the various components present and generate direct evidence of their chemical structure and concentration. In order to be successful, a number of questions need to be answered before the start of the analytical process: What toxicity and physical-chemical data are available? What models are available? What are the matrices of interest (water, soil, sediment, air, animal or plant tissue)? What range of concentrations do we need to achieve in the matrices of interest? What are the limiting factors in achieving good recoveries in the matrices of interest? Stability and homogeneity of the test substance are also key in many of the matrices (e.g., diet, water, sediment).

In order to meet the analytical requirements of regulatory agencies, scientists depend on more sensitive and rapid analytical techniques than the traditional technologies of gas chromatography (GC), gas chromatography mass spectrometry (GC/MS) and high-performance liquid chromatography (HPLC). These more sensitive analytical technologies may include liquid chromatography coupled to mass spectrometry (LC-MS) and tandem mass spectrometry (LC-MS/MS), allowing quantitative analysis according to regulatory requirements as well as aiding in compound screening, identification and confirmation.

In this article, the authors discuss some of the key aspects involved in the analytical methods involving multi-component or multi-constituent substances, as well as a number of applicable regulatory requirements.


2.0 Introduction
The number of multi-component or multi-constituent substances found in the environment is rapidly increasing. At the same time, our understanding of both regulated and unregulated chemicals, often with highly diverse structures and broad biological activities, and of the (medicinal) (bio)pharmaceuticals and personal-care products involved, is growing.

The increased occurrence as well as the added complexity of multi-component substances presents a major challenge for environmental analytical chemists. This challenge can be met with extensive analytical methods and comprehensive characterization using a variety of techniques and methods to confirm the occurrence of a particular chemical in the environment.

Definition: What are multi-component or multi-constituent substances?
Multi-component or multi-constituent substances are, according to the European Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) regulatory framework, defined by their quantitative composition in which more than one main constituent is present in a concentration ≥10% and <80% (w/w). The definition also states that components or constituents with a concentration <10% (w/w) should be identified as impurities, and it requires that impurities ≥1% (w/w) be specified by at least one classifier, such as name, CAS Registry number, etc. [1]
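The REACH thresholds above can be expressed as a simple classification rule. The sketch below is illustrative only, assuming the percentage cut-offs quoted from the REACH definition; the function names are not part of any regulatory tooling.

```python
def classify_constituent(fraction_ww):
    """Classify a single component by its mass fraction (w/w),
    following the REACH thresholds quoted above."""
    if fraction_ww >= 0.80:
        # A component at >=80% (w/w) points to a mono-constituent substance
        return "mono-constituent level"
    if fraction_ww >= 0.10:
        # >=10% and <80% (w/w): a main constituent
        return "main constituent"
    if fraction_ww >= 0.01:
        # <10% (w/w) is an impurity; those >=1% (w/w) must be specified
        # by at least one classifier (name, CAS Registry Number, etc.)
        return "impurity (must be specified)"
    return "impurity"


def is_multi_constituent(fractions_ww):
    """A substance is multi-constituent when more than one main
    constituent is present at >=10% and <80% (w/w)."""
    mains = [f for f in fractions_ww if 0.10 <= f < 0.80]
    return len(mains) > 1
```

For example, a 55/35/8/2 (w/w) reaction mass has two main constituents and is therefore multi-constituent, while an 85/10/5 mixture is not.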

In addition to REACH, the manufacturers of Fertilizers and Related Materials (FARM) have joined forces for their REACH compliance activities by launching the FARM REACH consortium in December 2008. They define multi-constituent substances as preparations. In their view, multi-constituent substances, irrespective of the manufacturing method of a final chemical product used, always contain a combination of a few (simple) ionic components. [1]

In the United States, the U.S. Toxic Substances Control Act of 1976 or TSCA regulates the types of chemicals that can be used in manufacturing, as well as the introduction of new chemicals. The act, which specifically mandates the United States Environmental Protection Agency (U.S. EPA) to protect the public from “unreasonable risk of injury to health or the environment” by regulating the manufacture and sale of chemicals, is essential given the large number of chemicals and multi-constituent substances subject to control, each with myriad synergistic (toxic) effects and multiple pathways of exposure.


3.0 Significant environmental risk
A key question, asked by analytical scientists and regulators alike, is whether the combined low concentrations of chemicals, (medicinal) (bio)pharmaceuticals and personal-care products found in multi-component substances in the aquatic environment have a significant, recognizable effect on ecologic function. A second, equally important, question asks whether multi-component substances pose a long-term risk to human and veterinary health. [2]

These questions are complicated by the fact that while the concentrations of these chemicals, (medicinal) (bio)pharmaceuticals and personal-care products may be relatively low – measured on the sub–part per billion or sub-nanomolar level – a number of these substances may share a specific mode of action (MOA), which, in turn, could lead to possible significant environmental effects through additive exposure. The concern is that these substances, which may enter our environment from a variety of routes, may escape detection if they are present at concentrations below detection limits, while still having hazardous long-term cumulative effects. This may especially be the case with residues of specific pharmaceuticals or personal-care products found in the aquatic environment. [2]

The reason for this concern is that a number of these pharmaceuticals are specifically designed to modulate endocrine and immune systems and cellular signal transduction; they have the potential to function as endocrine disruptors. Other agents found in the aquatic environment, including antibiotics used to treat human and veterinary disease, may contribute to the development of resistance in human pathogens. [3]

While anti-cancer drugs and antibiotics are generally administered to “cure” disease, other pharmaceuticals are largely designed to manage or control symptoms of chronic disease, leading to long-term use. Analytical scientists and regulators are concerned about the (in many cases) still-unknown effects of unintentional and long-term exposure to these agents in a healthy population. [4]

Complementing this concern is the fact that select multi-component substances “surviving” various steps of metabolism and other degradative or sequestering actions may create an environmental risk. In some cases, these products may even be more bioactive than the original compound, adding to the potential risk. [2]


4.0 Regulated and unregulated chemicals
Analytical scientists have, for many decades, centered their research on regulated chemicals included in various legislation in the United States and Europe. With the availability of more sensitive analytical methods such as liquid chromatography coupled to mass spectrometry (LC-MS) and tandem mass spectrometry (LC-MS/MS), which allow the detection of chemicals used in everyday life, such as surfactants and surfactant residues, pharmaceuticals and personal-care products, and gasoline additives, as well as determining their biological effects, scientists now are able to analyze unregulated contaminants that either went undetected before or were not considered a risk. A number of the chemicals in high-risk multi-component substances, including detergent metabolites, steroids, and prescription and non-prescription drugs, are among the compounds most frequently found in high concentrations in the aquatic environment. [4]


5.0 Human health and regulations
In addition, due to the potential implications of these compounds on human and veterinary health, environmental analysis, as part of the regulatory approval process related to the manufacturing of novel (bio) pharmaceuticals, usually includes rigorous quality assurance and quality control (QA/QC) metrics designed to confirm the reliability of the analytical data.

Overall, the regulatory expectations to better understand product impurities and degradants in biopharmaceutical products continue to increase, making environmental risk assessment considerably more complex, especially when it involves multi-component substances.


6.0 Why so complex?
Analyzing multi-component or multi-constituent substances is complex because of the difference between perceived facts and reality. For example, in analyzing multi-component or multi-constituent substances, it may be assumed that the materials or substances looked for are of a well-known toxicology and physiology, with well-known chemical and biological properties and known concentrations. In reality, specific data may be missing, chemical properties may be unknown or poorly understood, and finding the range of the concentration may be challenging. At the same time, questions may arise about the kind of mixture to be tested (what are we testing?), the formulation (do co-solvents alter the formulation?), insoluble materials (how low do we need to go?), polymers (differences in chain lengths) and testing near solubility (stock volumes).

Hence, successful analysis of multi-component substances requires in-depth knowledge of the chemical process used to manufacture multi-component or multi-constituent substances.

Characterizing multi-constituent substances is also complex because the analytical methods used must have the capacity to distinguish among all the substances present and must be able to generate direct evidence of their chemical structure and concentration.

According to REACH, this includes impurities down to a level of ≤1% (w/w) of the substance impacting the overall hazard classification. Because REACH defines multi-constituent substances as the product of a chemical reaction, comprehensive knowledge of the actual chemical process used to prepare the multi-constituent substance is essential when selecting analytical techniques that are meaningful for each substance, as well as when deciphering the test results. [1]

In the development of these meaningful analytical procedures, different analytical technologies designed to provide the means to acquire structural and compositional data, can be applied. This choice largely depends on the chemical nature and the complexity of the multi-constituent substance, but it should include either simultaneous separation-analysis techniques or an analysis of the reaction mass without physical separation of each of the constituents. [1]


7.0 Analytical methods
In their Guidance for Identification and Naming of Substances under REACH, the European Chemicals Agency (ECHA) offers limited details concerning spectroscopic and chromatographic methods that can be used to characterize multi-constituent substances. Among the methods they list as techniques that can be used to confirm the composition of multi-constituent substances are mass spectrometry (MS); gas chromatography (GC); gas chromatography–mass spectrometry (GC-MS); high-performance liquid chromatography (HPLC); ultra-performance liquid chromatography (UPLC), a relatively new technique offering new possibilities in liquid chromatography; and liquid chromatography–tandem mass spectrometry, or LC-MS/MS. [1]

Overall, the Guidance explains that methods offering simultaneous separation and analysis have the potential to significantly contribute to the characterization process. [1]


8.0 Gas chromatography
Gas chromatography is a technology often used in detecting volatile organic compounds (VOCs). This chromatography technique, extensively used in the analysis of pharmaceutical products, allows the analysis of impurities in pharmaceuticals as well as the identification of residual solvents listed by the International Conference on Harmonisation (ICH), making accurate quantitative determination of complex mixtures possible. This includes traces of multi-component or multi-constituent substances down to parts per trillion.


9.0 Gas chromatography–mass spectrometry
Gas chromatography is generally a reliable and effective method for separating compounds into their various components. However, it may not always be used for reliable identification of specific substances in the analysis of multi-component or multi-constituent substances.

In these cases, gas chromatography–mass spectrometry (GC-MS) may be used for conclusive proof of identity. The sample mixture is vaporized and separated into its individual components by gas chromatography; the column effluent is then fed into a mass spectrometer, where each component is identified through the mass of the analyte.

GC-MS is commonly used for confirmation testing of substances in pharmaceutical drug testing, quality control and environmental assessment.


10.0 High-performance liquid chromatography
One of the primary analytical tools to assess environmental occurrence of multi-component or multi-constituent substances is high-performance liquid chromatography or HPLC. High-performance liquid chromatography is an advanced form of liquid chromatography used to separate, identify and quantify the components in complex mixtures in both chemical and biological environments. This analytical technique is commonly used to determine the molecular species present in a specific multi-component sample.

The underlying principles guiding HPLC are based on the Van Deemter equation, an empirical formula describing the relationship between linear velocity (flow rate) and plate height (height equivalent of theoretical plate [HETP] or column efficiency). According to the Van Deemter equation, a decrease in particle size not only allows significant gain in efficiency, but this efficiency does not diminish at increased flow rates or linear velocities. [5]
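The relationship described above is conventionally written as (a standard textbook form of the equation, not taken from the cited source):

```latex
% Van Deemter equation: plate height H (column efficiency) as a
% function of the mobile-phase linear velocity u
H = A + \frac{B}{u} + C\,u
```

Here A (eddy diffusion) and C (resistance to mass transfer) both shrink with particle size, while B (longitudinal diffusion) dominates only at low flow rates; this is why smaller particles improve efficiency without that efficiency diminishing at higher linear velocities.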

Although HPLC helps analytical chemists answer many questions, it lacks, as a result of the proprietary nature of column packing, long-term reproducibility.


11.0 Ultra-performance liquid chromatography
Ultra-performance liquid chromatography (UPLC; Waters) is based on the same principles as HPLC and used in similar applications, focusing on small-molecule analysis. The use of shorter columns tightly packed with smaller particles (down to sub-2-µm) increases speed, resolution and sensitivity. As a result, UPLC significantly improves chromatographic separations. But this may also present unique challenges. UPLC systems and columns require higher levels of care and attention compared with traditional HPLC. Gaining these benefits requires fittings and pumps designed to support high system back-pressures. [5][6][7][10]

Depending on the actual system used, system back-pressures used in UPLC may reach values of 100 MPa. In contrast, in HPLC the maximum is between 35 and 45 MPa. This means, for example, that an analysis performed using UPLC can withstand back-pressures of about 90 MPa, which is not possible when using conventional HPLC. [10]

Although UPLC is generally used for small-molecule analysis, the technique is also used for the analysis of proteins. While protein chromatography has generally suffered from problems related to carryover, peak splitting, peak broadening and poor peak shape, using smaller particles at a higher pressure and flow rate – as in the case of UPLC – has remedied many of these problems. Furthermore, when combined with mass spectrometry, UPLC becomes a powerful tool used to separate, identify, characterize and quantify (intact) proteins. [7]


12.0 Liquid chromatography–mass spectrometry
One of the first applications of liquid chromatography–mass spectrometry, also known as LC-MS, was the detection of specific pharmaceuticals and their metabolites within biological fluids in pharmacokinetic studies. While this is still a major application today, LC-MS has also become a powerful and invaluable technique adapted for the detection and trace analysis of polar compounds in aqueous samples from the environment. [8]

Used for many applications, LC-MS combines the physical separation capabilities of high-performance liquid chromatography (HPLC) with the mass analysis capabilities of mass spectrometry, offering a range of advantages due to its high sensitivity and mass selectivity.

Liquid chromatography coupled to mass spectrometry (LC-MS) and tandem mass spectrometry (LC-MS/MS) has become a technique of choice for the analysis of high-risk chemicals and (bio) pharmaceuticals found in multi-component substances. This includes endocrine disruptors, causative agents of bacterial resistance (as a result of antibiotic use) and alkylphenolic surfactants.

LC-MS has many specific benefits. Because it enables the identification and quantification of substances without derivatization, this technique is faster, more convenient and more sensitive compared with other methods. In some cases, LC-MS has been used to analyze multi-component or multi-constituent substances that could not be determined before by using different technologies and methods. Furthermore, because of superior sensitivity, selectivity, flexibility and a wide range of metabolite detection, LC-MS has been a promising platform in the analysis of low-abundant metabolites. Overall, the use of LC-MS results in lower detection limits.

The advancement of LC-MS has largely been made possible by changed interface designs. Over the last decade, they have become more sophisticated and efficient. Among the most widely used interfaces for LC-MS analysis of steroids, drugs and surfactants in the aquatic environment are electrospray ionization or ESI (for analysis of polar compounds) and atmospheric pressure chemical ionization or APCI (for the analysis of medium- and low-polarity substances). [9]

However, although LC–MS profiling has become more advanced, it may not always be sensitive enough to detect and characterize metabolites at trace levels. Hence, liquid chromatography–tandem mass spectrometry, or LC–MS/MS, has been developed.


13.0 Liquid chromatography–tandem mass spectrometry
Liquid chromatography–tandem mass spectrometry or LC-MS/MS is widely used for highly selective and sensitive bioanalysis of small molecules and offers higher sensitivity and selectivity in the trace analysis of multi-component or multi-constituent substances. In contrast to conventional photodiode array detection (PDA), analytes do not have to be fully resolved to be identified and quantitated. Furthermore, chemical derivatization is not required, as it is, for example, in gas chromatography–mass spectrometry (GC-MS).

LC-MS/MS has evolved into a vital, strategic and powerful qualitative and quantitative analytical technique with a wide range of clinical applications, including therapeutic drug monitoring (TDM), toxicology, microbiology, the emerging field of proteomics and other applications.

One of the benefits of LC-MS/MS is that it allows analytical chemists to multiplex, helping them to identify and quantify multiple analytes simultaneously, reducing cost per test. Cost savings are further realized by increased throughput via simplified or minimal sample preparation for a variety of applications, including dilute-and-shoot or protein crash, compared with more time-consuming and expensive sample preparation methods like solid-phase extraction (SPE) or derivatization.


14.0 A different strategy
A different analytical strategy is the analysis of the reaction mass without physical separation of each of the constituent substances. This characterization process may be adopted when physical separation using standard technologies to unequivocally confirm the composition of the multi-component or multi-constituent substance may be difficult or impossible.

This alternative strategy uses light in the infrared (IR) spectrum, which interacts with the bonds in molecules and resonates at particular frequencies. And while the “light” absorption signatures of some of the molecules of interest can be quite weak, if relevant signals can be sufficiently detected and identified, this non-separation approach may involve the use of so-called “spectral fingerprints.” [1] However, when comparing these spectral fingerprints with reference data, special care needs to be given to the fact that the data used may have been obtained using a number of different instruments, conditions or methods, limiting reproducibility. [1][11]


15.0 Matrix effect
In the analysis of multi-component or multi-constituent substances, the matrix effect, historically associated with bioanalytical methods, refers to the specific effects caused by all other components in a sample except for the target compound to be quantified. These effects may be caused by either endogenous or exogenous composites in the sample.
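Matrix effects are commonly quantified by comparing an analyte's response in a post-extraction spiked matrix with its response in neat solvent, a widely used post-extraction addition approach. The sketch below assumes that convention; the function name is illustrative.

```python
def matrix_effect_percent(area_in_matrix, area_in_solvent):
    """Matrix effect (%) via post-extraction addition: the peak area of
    an analyte spiked into blank matrix extract divided by its peak
    area in neat solvent, times 100.

    ~100% -> no matrix effect
    <100% -> ionization suppression
    >100% -> ionization enhancement
    """
    return 100.0 * area_in_matrix / area_in_solvent
```

For instance, an analyte whose peak area drops from 1.0e6 counts in neat solvent to 7.2e5 counts in matrix shows a matrix effect of 72%, i.e., 28% ionization suppression.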

In the analysis of very complex matrices, even when using selected reaction monitoring or SRM detection, false negative results (as a result of matrix ionization suppression effects) and false positive results (due to insufficient selectivity) can occur. [12]

The incidence of matrix effects in LC-MS/MS methods has led to an increased understanding of the factors contributing to the occurrence of these effects and how to handle them. Improvement in the instrumentation and analytical methodology, including modified ionization, ionization switching and extraction modification, has improved the reproducibility and robustness of LC-MS/MS. [4][13]


16.0 Increased selectivity
Due to its inherent selectivity, sensitivity, flexibility and multi-component capability, the application of LC-MS/MS has rapidly expanded in recent years. Using LC-MS/MS, analytical chemists can solve challenging clinical and biomedical research problems more thoroughly and efficiently than was previously possible. Novel technology also offers analytical chemists reproducibility, selectivity and sensitivity often unattainable with immunoassays.


17.0 Sample preparation
However, with all the advanced analytical methods and techniques available, there remains one crucial aspect to be considered: sample preparation. LC-MS and LC-MS/MS analysis of multi-component or multi-constituent substances requires sensitive and robust assays. Analysis often involves very complex samples, requiring expert preparation protocols designed to remove unwanted components as well as selectively extract components of interest. Hence, before selecting a specific testing method, analytical chemists need to answer a number of key questions, including:

  • How high do we need to test?
  • What are the limiting factors in achieving nominal concentrations?
  • Is it necessary for test concentrations to be between 80% and 120% of nominal?
  • What toxicity and physical-chemical data are available?
  • What are the test concentrations in aquatic testing? and
  • What models are available?

Some answers may be found in recommendations and (regulatory) guidelines. For example, guideline recommendations for the selection of test concentrations for aquatic testing limit the concentration of pesticides to 100 mg/L and industrial chemicals to 1,000 mg/L. Likewise, in the development of aquatic tests, a number of important questions related to functional solubility need to be considered, including stability vs. degradation (hydrolysis and photolysis) and absorption vs. volatility. With multi-component substances in aquatic tests, it may be necessary to run Water Accommodated Fraction (WAF) trials, followed by toxicity tests. One could then develop methods to identify the key analytical components in the WAF to determine which components are likely causing the toxicity. If there are no effects, then additional testing would not be warranted.
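These guideline checks reduce to simple arithmetic. The sketch below is illustrative only: the function and dictionary names are hypothetical, and the limit values are the ones cited above (100 mg/L for pesticides, 1,000 mg/L for industrial chemicals), with the 80–120% window of nominal left as parameters since guidelines vary by study type.

```python
# Upper limits for aquatic test concentrations (mg/L), per the
# guideline recommendations cited above.
GUIDELINE_LIMIT_MG_L = {
    "pesticide": 100.0,
    "industrial": 1_000.0,
}


def within_nominal_window(measured, nominal, low=0.80, high=1.20):
    """True if a measured test concentration lies within the commonly
    cited 80-120% window around the nominal concentration."""
    return low * nominal <= measured <= high * nominal


def max_test_concentration(substance_class):
    """Look up the upper limit (mg/L) for aquatic testing of a
    substance class ('pesticide' or 'industrial')."""
    return GUIDELINE_LIMIT_MG_L[substance_class]
```

A measured concentration of 95 mg/L against a 100 mg/L nominal would pass the window check, whereas 70 mg/L would not.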


18.0 Conclusion
The use of advanced LC-MS and LC-MS/MS technologies for the environmental assessment of multi-component substances has allowed analytical chemists to define a large number of compounds, especially polar compounds, that previously were either difficult or even impossible to analyze. [3]

Moreover, the introduction of novel interfaces and triple quadrupole analyzers has improved the sensitivity and selectivity of detection. Consequently, the analysis of steroids, many pharmaceuticals and alkylphenolic surfactants in the environment is possible at the ng/L and ng/g level, and even at the pg/L and pg/g level. [8]

Although enhanced selectivity and sensitivity and rapid, generic gradients have made LC–MS the predominant technology for both quantitative and qualitative analyses, the most important value and application of current LC-MS techniques is the determination of known target compounds, because the capacity of these techniques for screening and identification of unknowns is relatively low. [15]

Despite the high selectivity of LC-MS–based methodologies, and in particular of LC-MS/MS, false negative findings can still occur due to the often high complexity of multi-component substances. To solve this problem, rigorous confirmation and identification criteria in terms of retention time, base peak and diagnostic ions, and relative abundances remain crucial. [8] [14]

In routine analyses it is important to consider speed, sensitivity and resolution. However, the costs associated with the analysis and with column maintenance should also be considered. This, in turn, involves choosing the appropriate mobile phases, careful system washing, and application of adequate flow rates with regard to the column and system properties.

Finally, a proper approach to sample pretreatment remains an indispensable part of the analytical workflow. In recent decades, important progress has been made with regard to the preparation of samples, including compensation for matrix effects. This is important when considering the nature of matrix interference in LC-MS analysis. [6]

The development of new technologies, including fully automated LC-MS/MS assays, is expected to significantly impact the environmental assessment of multi-component substances. In turn, these technologies can help analytical scientists expand the knowledge about the presence, fate and persistence of known and newly identified multi-component substances and their degradation products found in the environment, allowing them to assess potential risks and develop, if necessary, remediation strategies and actions. [15]


February 26, 2017 | Corresponding Authors: Duane Huggett and Hank Krueger | DOI: 10.14229/jadc.2017.29.04.004

Received: April 27, 2017 | Accepted for Publication: April 27, 2017 | Published online May 16, 2017 |

Last Editorial Review: May 16, 2017

Featured Image: Close-up of an HPLC instrument pump (used for analytical chemistry work). Courtesy: © Fotolia. Used with permission.

Creative Commons License
This work is published by InPress Media Group, LLC (Challenges in Environmental Testing Multi-component Substances) and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at adcreview.com/about-us/permission.


Copyright © 2017 InPress Media Group. All rights reserved. Republication or redistribution of InPress Media Group content, including by framing or similar means, is expressly prohibited without the prior written consent of InPress Media Group. InPress Media Group shall not be liable for any errors or delays in the content, or for any actions taken in reliance thereon. ADC Review / Journal of Antibody-drug Conjugates is a registered trademark of InPress Media Group around the world.

The post Challenges in Environmental Testing of Multi-component Substances appeared first on ADC Review.

Environmental Risk Assessment and New Drug Development


1.0 Abstract
In our globalized world, human pharmaceutical residues and traces of other (chemical) down-the-drain contaminants have become an environmental concern. Following the detection of (pharmaceutical) drug residues in drinking and surface waters, regulatory agencies around the world, including the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA), have developed detailed guidance on how pharmaceutical products should be assessed for possible adverse environmental effects.

Hence, an Environmental Risk Assessment or ERA is required as part of the clinical development, regulatory submission and marketing authorization of pharmaceuticals. This is mandatory for drugs both for the treatment of human diseases as well as veterinary use.

Using fate, exposure and effects data, an environmental risk assessment or ERA evaluates the potential risk of (new) medicinal compounds and the environmental impact they cause.

Despite the available guidance from regulatory agencies, regulatory policy is complex, and a number of aspects related to ERA remain unclear because they are not yet well defined. Furthermore, the specific requirements are not always straightforward. Moreover, while some types of chemicals are exempt (e.g., vitamins, electrolytes, peptides, proteins), such exemption may be overruled when a specific mode of action (MOA) involves endocrine disruption and modulation.

In this white paper, which focuses on human pharmaceuticals rather than veterinary pharmaceuticals, the author reviews topics ranging from regulations and environmental chemistry to exposure analysis and environmental toxicology. He also addresses key aspects of an ERA.


2.0 Introduction
The effective functioning of a modern, healthy society increasingly demands the development of novel therapeutic agents for the treatment of human and veterinary disease, as well as the new and emerging technologies that form the foundation for this advancement. A proper understanding of the environmental health and safety risks that may be introduced into the environment as part of developing these new medicines is an important part of this process.

To understand these risks, Environmental Risks Assessments or ERAs are designed to systematically organize, evaluate and understand relevant scientific information. The purpose of such assessment is to ascertain if, and with what likelihood, individuals are directly or indirectly exposed to (novel) medicinal compounds, (bio) pharmaceutical products or active pharmaceutical ingredients in our immediate environment, as well as the consequences of such exposure. The information can then be used to assess if the use of these agents may result in unintended health-related impairment or harm as the result of such exposure, as well as the impact these agents may have on a globalized world. [1]

3.0 Exposure
Exposure may occur if humans come into contact with (novel) medicinal compounds, (bio) pharmaceutical products or active pharmaceutical ingredients. And while therapeutic agents may be intended to cause some measure of harm – for example, chemotherapeutic agents in the treatment of patients with various forms of cancer designed to “kill” malignant cells – unintended environmental exposure may, in turn, cause unintended serious adverse events. In many cases, such exposure may be limited to trace levels of the active pharmaceutical ingredient.

Over the past 30 years, the impact of such exposure, as well as its implications, has become clearer. Because early analytical equipment lacked sensitivity, traces of (novel) therapeutic and medicinal compounds, (bio) pharmaceuticals and active pharmaceutical ingredients were not readily detected in the environment until the 1990s. As a result, the impact of these agents in the environment was generally considered negligible.

However, since the late 1940s, scientists have been aware of the potential that a variety of chemicals are able to mimic endogenous estrogens and androgens. [2][3][4]

The first accounts indicating that hormones were not completely eliminated from municipal sewage, wastewater and surface water were not published until 1965, by scientists at Harvard University, [5][6] and it was not until 1970 that scientists concerned with wastewater treatment probed to what extent steroids are biodegradable, given that hormones are physiologically active in very small amounts. [7]

However, the first report specifically addressing the discharge of medicinal compounds, pharmaceutical agents or active pharmaceutical ingredients into the environment was published in 1977 by scientists from the University of Kansas. [8]

Despite these and many other early findings, the subject of medicinal compounds such as steroids and other pharmaceutical residues in wastewater did not gain significant attention until the 1990s, when the occurrence of hermaphroditic fish was linked to natural and synthetic steroid hormones in wastewater. [9]

In numerous studies and reports, researchers hypothesized and confirmed that effluent discharge in the aquatic environment, such as municipal sewage, wastewater systems as well as surface waters, contained either a substance or (multiple) substances, including natural and synthetic hormones, that are estrogenic to fish, affecting their reproductive systems. [10]

In time, scientists confirmed that these adverse effects, and implications of endocrine disruption and modulation, were caused by residues of estrogenic human pharmaceuticals. [1]

After discovering hermaphroditic fish in and near water-treatment facilities, scientists identifying the estrogenic compounds most likely associated with this occurrence confirmed that substances such as ethynylestradiol, originating from pharmaceutical use, generated a similar effect in caged fish exposed to levels as low as 1 to 10 ng/L, and that positive responses may even arise at 0.1 to 0.5 ng/L. [9]

Although it was now recognized that the therapeutic agent or active pharmaceutical ingredient itself was biologically active, experts generally believed that there was only a limited environmental impact during manufacturing; and because these therapeutic agents were only manufactured in relatively small amounts, they were not concerned about the potential environmental risk of pharmaceutical residues and trace contaminants. [1]

4.0 Pharmacotherapy
Today, with pharmacotherapy a common part of our daily life, many concerned citizens realize that pharmaceutical residues and trace contaminants may represent an increased environmental risk with potential consequence for human and animal health. [1]

And although the concentrations of these residues rarely exceed the level of parts per billion (ppb), limiting acute toxicity, the emergence of these residues and traces in the environment fundamentally changed the way we look at the (potential) risk of these active pharmaceutical ingredients in the ecosystem. [1]

But regulators have also come to understand that environmental risk assessment developed for non-medicinal chemical containment cannot necessarily be applied to (novel) medicinal compounds, (bio) pharmaceutical products or active pharmaceutical ingredients. They understand that protecting the environment, while at the same time improving human and animal health, requires a better understanding of how to protect the environment (the ecosystem) as well as the active pharmaceutical ingredient in its own regulated environment.

5.0 Value for society
The issue of medicinal compounds, (bio) pharmaceutical agents and active pharmaceutical ingredients in our environment is complex. This complexity is, in part, derived from the medicinal value of these compounds and the general acceptance that patient use – and therefore the excretion of active pharmaceutical ingredients into the environment and, as a result, the potential of harmful effects to the ecosystem and human health – rather than other methods of release, is the primary reason why we find traces of these agents in our environment. [11]

There is no doubt that modern medicines developed by research-based pharmaceutical companies have brought tremendous value. For example, the development of antibiotics generated enormous gains in public health through the prevention and treatment of bacterial infections. In the 20th century, the use of antibiotics aided the unprecedented doubling of the human life span. [12][13]

Before the development of insulin in the late 1920s and early 1930s, people diagnosed with diabetes (type 1) were not expected to survive. In 1922, children with diabetes rarely lived a year after diagnosis. Five percent of adults died within two years, and less than 20% lived more than 10 years. But since insulin became available, the drug has become a daily routine for people with diabetes, creating a real survival benefit and making the difference between life and death. [14]

Pharmaceutical agents have also drastically impacted social life. The introduction of the pill in the early 1960s, for example, affected women’s health, fertility trends, laws and policies, religion, interpersonal relations, family roles, women’s careers, gender relations and premarital sexual practices, offering a host of contraceptive and non-contraceptive health benefits. [15]

It can be said that the emergence of the women’s rights movement of the late 1960s and 1970s is directly related to the availability of the pill and the control over fertility it enabled: It allowed women to make personal choices about life, family and work. [15]

The development of novel targeted anticancer agents, including antibody-drug conjugates or ADCs, has resulted in a new way of treating cancer and hematological malignancies with fewer adverse events, longer survival and better quality of life (QoL).
In the end, the economic impact of pharmaceutical agents, some hailed as true miracles, has been remarkable, contributing to our ability to cure and manage (human) disease and allowing people to live longer, healthier lives.
At the same time, the (clinical) use of (novel) medicinal or (bio) pharmaceutical agents and their underlying active ingredients can also harbor a number of risks for the environment.

6.0 Understanding environmental risk
In the development of novel therapeutic agents, intensive pre-clinical investigations yield a vast amount of pharmacological and toxicological data. During the discovery and (early) development of therapeutic agents, researchers pay close attention to target specificity and pathways to understand how an innovative drug compound may have beneficial efficacy in the treatment of human or veterinary diseases. Because adverse events are undesirable, drug developers often focus on therapeutics with a well-understood mechanism of action (MOA) and low toxicity at environmentally relevant concentrations (often in the ng/L range). [1]

As a result, only a small number of pharmaceuticals are classified as highly and acutely toxic, which calls for new approaches to identifying pharmaceutical agents in robust environmental hazard and risk assessments. [16]

7.0 Pharmaceutical risk assessment
While non-medicinal and chemical entities produced in significant commercial quantities require an environmental risk assessment based on a minimum set of hazard data to assess and manage risks to humans and the environment, such an approach does not necessarily apply to (novel) therapeutic agents. One reason is that the health and wellbeing of humans should never be assessed and managed on the basis of risk alone. Regulators generally require drug developers or sponsors to undertake a comprehensive assessment of the potential risks and benefits of a proposed therapeutic agent, which may demonstrate significant risk to the patient. However, these risks are largely offset by the medicinal benefits of such agents.

Regulators around the world require a systematic and transparent assessment of (potential) environmental risk, in addition to a (novel) medicinal agent’s quality, safety, efficacy and relevance, as part of regulatory decision-making. [17]

8.0 Environmental risk and regulatory requirements in the United States
The legal mandate for protecting the environment in the United States consists of the National Environmental Policy Act of 1969 (NEPA), which requires all federal agencies to assess the environmental impact of their actions, and the Federal Food, Drug and Cosmetic Act (FFDCA) of 1938 (amended in 1976).

This legal framework further determines that the regulation of pharmaceuticals in the environment is the responsibility of the United States Environmental Protection Agency or EPA and the United States Food and Drug Administration (FDA), which is required to consider the environmental impact of approving novel therapeutic agents and biologics applications as an integral part of the regulatory process.

The FDA has required environmental risk assessments for (novel) medicinal compounds, (bio) pharmaceutical agents and active pharmaceutical ingredients for veterinary use (since 1980) as well as the treatment of human diseases (since 1998).

As such, the FDA regulations in 21 CFR part 25 identify when a Pharmaceutical Environmental Risk Assessment (PERA) is required as part of a New Drug Application (NDA), an abbreviated application, or an Investigational New Drug (IND) application. [18]

The same regulations (21 CFR 25.30 or 25.31) identify categorical exclusions for a number of products and product categories – including vitamins, electrolytes, peptides, proteins, etc. – that do not require the preparation of an environmental risk assessment or ERA because, as a class, these agents, individually or cumulatively, do not significantly affect the quality of the (human) environment.

In addition, and in contrast to the categorical exclusion, these regulations also identify cases when such an exemption may be overruled as the result of a specific mode of action (MOA) involving endocrine disruption and modulation. [18]

9.0 Required ERA
Under the applicable regulations, NDAs, abbreviated applications and supplements to such applications do not qualify for a categorical exclusion if the FDA’s approval of the application results in increased use of the active moiety or active pharmaceutical ingredient (through higher dose levels, longer duration of use, or a different indication than previously approved), or if the medicinal agent is a new molecular entity and the estimated concentration of the active therapeutic agent at the point of entry into the aquatic environment is expected to be 1 part per billion (ppb) or greater.

Furthermore, a categorical exclusion is not applicable when approval of an application results in a significantly altered concentration or distribution of a (novel) therapeutic agent, the active pharmaceutical ingredient, its metabolites or degradation products in the environment.

Regulations also refer to so-called extraordinary circumstances (stated in 21 CFR 25.21 and 40 CFR 1508.4) where a categorical exclusion does not exist. This may be the case when a specific product significantly affects the quality of the (human) environment and the available data establishes that there is a potential for serious harm. Such environmental harm may go beyond toxicity and may include lasting effects on ecological community dynamics. Hence, it includes adverse effects on species included in the United States Endangered Species Act (ESA) as well as other federal laws and international treaties to which the United States is a party. In these cases, considered extraordinary circumstances, an environmental risk assessment is required unless there are specific exemptions relating to the active pharmaceutical ingredient.

10.0 Naturally Occurring Substances
Based on the current regulations, a drug or biologic may be considered a “naturally occurring” substance if it comes from a natural source or is the result of a biological process, even if such a product is chemically synthesized. Regulators consider the form in which an active ingredient or active pharmaceutical agent exists in the environment to determine whether a medicinal compound or biologic is a naturally occurring substance; biological and (bio) pharmaceutical compounds are evaluated in the same way.

According to the Guidance for Industry, a protein or DNA containing naturally occurring amino acids or nucleosides with a sequence different from that of a naturally occurring substance will, after consideration of metabolism, generally qualify as a naturally occurring substance. The same principle applies to synthetic peptides and oligonucleotides as well as living and dead cells and organisms. [18]

11.0 Preparing an Environmental Risk Assessment
If an environmental risk assessment is required, the FDA requires drug developers and/or sponsors to focus on characterizing the fate and effects of the active pharmaceutical ingredient in the environment as laid out in the Guidance for Industry, Environmental Assessment of Human Drugs and Biologics Applications (1998). [18]

This is generally the case if the estimated concentration of the active pharmaceutical ingredient being considered reaches, at the point of entry into the aquatic environment, a concentration ≥1 ppb; significantly alters the concentration or distribution of a naturally occurring substance, its metabolites or degradation products in the environment; or, based on available data, it can be expected that an increase in the level of exposure may, potentially, lead to serious harm to the environment. [18]

To ensure that adequate information is available, the 1998 Guidance for Industry lays out a tiered approach for toxicity testing to be included in an environmental risk assessment. [Figure I] [18]

Furthermore, if potential adverse environmental impacts are identified, the environmental risk assessment should, in accordance with 21 CFR 25.40(a), include a discussion of reasonable alternatives designed to offer less environmental risk or mitigating actions that lower the environmental risk.

Figure 1: Tiered Approach to Fate and Effect Testing (USA) [18]
12.0 A Tiered Approach
The fate and effects testing is based on a tiered approach:

12.1 Tier 1
This step does not require acute ecotoxicity testing to be performed if the EC50 or LC50 divided by the maximum expected environmental concentration (MEEC) is ≥1,000, unless sublethal effects are observed at the MEEC. If sublethal effects are observed, chronic testing as indicated in tier 3 is required. [18]

12.2 Tier 2
In this step, acute ecotoxicity testing must be performed on a minimum set of aquatic and/or terrestrial organisms. In this phase, acute ecotoxicity testing includes a fish acute toxicity test, an aquatic invertebrate acute toxicity test and an algal species bioassay.

Similar to tier 1, tier 2 does not require acute ecotoxicity testing to be performed if the EC50 or LC50 for the most sensitive organisms included in the base test, divided by the maximum expected environmental concentration (MEEC) is, in this tier, ≥100, unless sublethal effects are observed at the MEEC. However, as in the case of tier 1, if sublethal effects are observed, chronic testing as indicated in tier 3 is required. [18]

12.3 Tier 3
This tier requires chronic toxicity testing if the active pharmaceutical ingredient has the potential to bioaccumulate or bioconcentrate, or if such testing is required based on tier 1 or tier 2 test results. [18]

13.0 Bioaccumulation and Bioconcentration
Bioaccumulation and bioconcentration are complex and dynamic processes depending on the availability, persistence and physical/chemical properties of an active pharmaceutical ingredient in the environment. [18]

Bioaccumulation and bioconcentration refer to an increase in the concentration of the active pharmaceutical ingredient in a biological organism over time, compared with the concentration in the environment. In general, compounds accumulate in living organisms any time they are taken up and stored faster than they are metabolized or excreted. The understanding of this dynamic process is of key importance in protecting human beings and other organisms from the adverse effects of exposure to a (novel) medicinal compound, (bio) pharmaceutical agent or active pharmaceutical ingredient, and it is a critical consideration in the regulatory process. [21]

According to the definition in the Guidance for Industry, active pharmaceutical ingredients are generally not very lipophilic and, in comparison to industrial chemicals, are produced in relatively low quantities. Furthermore, the majority of active pharmaceutical ingredients are metabolized to substances that are more polar, less toxic and less pharmacologically active than the original parent compound. This suggests a low potential for bioaccumulation or bioconcentration. [18]

Following a proper understanding of this process, tier 3 chronic toxicity testing is required if an active pharmaceutical ingredient has the potential to bioaccumulate or bioconcentrate. A primary indicator is the octanol/water partition coefficient (Kow): if log Kow is high, the active pharmaceutical ingredient tends to be lipophilic. If log Kow is ≥3.5 under relevant environmental conditions, such as a pH of 7, chronic toxicity testing is required.

Tier 3 does not require further testing if the EC50 or LC50 divided by the maximum expected environmental concentration (MEEC) is ≥10, unless sublethal effects are observed at the MEEC.
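The tier logic described above reduces to a simple risk-quotient comparison of an acute endpoint (EC50 or LC50) against the maximum expected environmental concentration (MEEC). The following sketch is illustrative only; the function names and structure are assumptions, not part of the guidance, while the thresholds (≥1,000 for tier 1, ≥100 for tier 2, ≥10 for tier 3) are those stated in the text:

```python
# Illustrative sketch of the tiered fate-and-effects screen (1998 FDA Guidance).
# Function names and structure are assumptions for illustration.

def risk_ratio(ec50_or_lc50: float, meec: float) -> float:
    """Ratio of the acute toxicity endpoint to the maximum expected
    environmental concentration (MEEC), in the same units (e.g., mg/L)."""
    return ec50_or_lc50 / meec

def needs_chronic_testing(ratio: float, tier_threshold: float,
                          sublethal_effects_at_meec: bool) -> bool:
    """A tier is passed when the ratio meets its threshold AND no sublethal
    effects are observed at the MEEC; otherwise testing escalates toward
    tier 3 chronic toxicity testing."""
    return ratio < tier_threshold or sublethal_effects_at_meec

# Thresholds per tier, as stated in the text.
TIER_THRESHOLDS = {1: 1000.0, 2: 100.0, 3: 10.0}

# Hypothetical worked example: EC50 = 5 mg/L, MEEC = 0.002 mg/L.
ratio = risk_ratio(5.0, 0.002)
print(ratio)                                                     # 2500.0
print(needs_chronic_testing(ratio, TIER_THRESHOLDS[1], False))   # False
```

With no sublethal effects at the MEEC, a ratio of 2,500 clears the tier 1 threshold of 1,000 and no acute ecotoxicity testing is required.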

In accordance with the Guidance for Industry, a drug developer or sponsor should include a summary discussion of the environmental fate and effect of the active pharmaceutical ingredient in an environmental risk assessment. The environmental risk assessment should also include a discussion of the affected aquatic, terrestrial or atmospheric environments. [18]

14.0 Special Consideration: Environmental Impact Statement
Following the filing of an environmental risk assessment for gene therapies, vectored vaccines and related recombinant viral or microbial products, the FDA will evaluate the information and, based on the submitted data, determine whether the proposed (novel) medicinal compound, (bio) pharmaceutical agent or active pharmaceutical ingredient may significantly affect the environment and if an Environmental Impact Statement (EIS) is required. According to 21 CFR 25.52, if an EIS is required, it will be available at the time the product is approved. Furthermore, if required, an EIS includes, according to 40 CFR 1502.1, a fair discussion of the environmental impact as well as information to help decision-makers and the public find reasonable alternatives that help in avoiding or minimizing adverse impacts or enhance environmental quality. [19]

However, if the FDA determines that an EIS is not required, a Finding of No Significant Impact (FONSI) will, according to 21 CFR 25.41(a), explain why this is not required. This statement will include either the environmental risk assessment or a summary as well as reference to underlying documents supporting the decision. [19]

15.0 European requirements
In Europe, environmental risk assessments were, in accordance with EU Directive 92/18/EEC and the corresponding note for guidance issued by the European Medicines Agency (EMA), first required for (novel) medicinal agents for veterinary use in 1998. The requirement for an environmental risk assessment for (novel) medicinal agents, (bio) pharmaceuticals and active pharmaceutical ingredients for the treatment of human disease was first described in 2001, in Directive 2001/83/EC.

Subsequent to an initial guiding document published in January 2005, the European Medicines Agency’s Committee for Medicinal Products for Human Use (CHMP) issued its final guidance for the assessment of environmental risk of medicinal products for human use in 2006. [20]

Since the discovery of pharmaceutical residues and trace contaminants in the environment, regulators in the European Union have required that an application for marketing authorization of a (novel) medicinal or (bio) pharmaceutical agent be accompanied by an environmental risk assessment.

This requirement is spelled out in the revised European Framework Directive relating to medicinal products for human use. It applies for new registrations as well as repeat registrations for the same medicinal agent if the approval of such an extension or application leads to the risk of increased environmental exposure.

In Europe, the objective of the environmental risk assessment is to evaluate, in a step-wise, phased procedure, and as part of the Centralized Procedure by the European Medicines Agency’s Committee for Medicinal Products for Human Use (CHMP), the potential environmental risk of (novel) medicinal compounds, (bio) pharmaceutical agents and/or active pharmaceutical ingredients. Such an assessment will be executed on a case-by-case basis.

16.0 Phase I
In this process, Phase I estimates the exposure of the environment to the drug substance and is only focused on the active pharmaceutical ingredient or drug substance/active moiety, irrespective of the intended route of administration, pharmaceutical form, metabolism and excretion.
This phase excludes amino acids, proteins, peptides, carbohydrates, lipids, electrolytes, vaccines and herbal medicines, because regulators believe that these biologically derived products are unlikely to present a significant risk to the environment. [21]
The exemption for these biologically derived biopharmaceuticals is generally interpreted as an exemption for all biopharmaceutical agents manufactured via live organisms and that have an active ingredient that is biological in nature. [21]

Yet, not all biologically derived biopharmaceuticals are (easily) biodegradable, and scientists have detected modified natural products, including plasmids, in the environment. Furthermore, some protein structures, including prions, are very stable and resistant to degradation, allowing them to persist in the environment. [22] Hence, this blanket exemption may require further scientific justification.
In Phase I, following the directions included in the European Chemicals Bureau (2003) Technical Guidance Document, an active pharmaceutical ingredient or drug substance/active moiety with a logKow >4.5 requires further screening for persistence, bioaccumulation and toxicity, or a PBT assessment.

For example, based on the OSPAR Convention and REACH Technical Guidance, highly lipophilic agents and endocrine disruptors are referred to PBT assessments.
Phase I also includes the calculation of the Predicted Environmental Concentration (PEC) of active pharmaceutical ingredients, which, in this phase, is restricted to the aquatic environment, together with a so-called “action limit” above which additional screening is required.
The “action limit” threshold for the PEC in surface water (PECsurfacewater), for example, is calculated from the daily dose of the active pharmaceutical ingredient and default values for per-capita wastewater production; the estimate may be refined using sales and/or distribution data or evidence of metabolism, provided no biodegradation or retention following sewage treatment is observed.
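As a rough illustration of this Phase I screen, the sketch below computes PECsurfacewater using the default values cited in the EMA guideline (market penetration factor Fpen = 0.01, 200 L of wastewater per inhabitant per day, dilution factor 10) and compares it against the 0.01 µg/L action limit. The function itself and the example dose are assumptions for illustration:

```python
# Illustrative Phase I PEC calculation with EMA guideline default values.
# PEC [mg/L] = (DOSEai [mg/inhabitant/day] * Fpen) / (WASTEWinhab [L/inhabitant/day] * DILUTION)

def pec_surface_water_ug_per_l(dose_ai_mg_per_day: float,
                               f_pen: float = 0.01,
                               wastewater_l_per_inhab_day: float = 200.0,
                               dilution: float = 10.0) -> float:
    """Predicted environmental concentration in surface water, in µg/L."""
    pec_mg_per_l = (dose_ai_mg_per_day * f_pen) / (wastewater_l_per_inhab_day * dilution)
    return pec_mg_per_l * 1000.0  # convert mg/L to µg/L

ACTION_LIMIT_UG_PER_L = 0.01  # Phase I action limit

# Hypothetical example: maximum daily dose of 100 mg of active ingredient.
pec = pec_surface_water_ug_per_l(100.0)
print(pec)                              # 0.5
print(pec >= ACTION_LIMIT_UG_PER_L)     # True -> proceed to Phase II screening
```

A 100 mg daily dose yields a PEC of 0.5 µg/L, well above the 0.01 µg/L action limit, so Phase II fate and effects assessment would be triggered.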

17.0 Phase II
Phase II, divided into two parts, tier A and tier B, assesses the fate and effects of novel medicinal compounds, (bio) pharmaceutical agents or active pharmaceutical ingredients in the environment.

Following the assessment of the PEC/PNEC ratio based on relevant environmental fate and effects data (Phase II Tier A), further testing may be needed to refine the PEC and PNEC values in Phase II Tier B. A PEC/PNEC ratio of 1 or greater indicates a potential environmental risk and triggers this refinement. This process helps regulators to evaluate potential adverse effects independently of the benefit of the (novel) medicinal compound, (bio) pharmaceutical agent or active pharmaceutical ingredient, or its direct or indirect impact on the environment.
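A minimal sketch of this Tier A decision, assuming the PNEC is derived from the most sensitive no-observed-effect concentration divided by an assessment factor (the factor of 10 used here is an assumption for illustration):

```python
# Illustrative Phase II Tier A screen: compare PEC against PNEC.
# The assessment factor of 10 is an assumption for this sketch.

def pnec(lowest_noec_ug_per_l: float, assessment_factor: float = 10.0) -> float:
    """Predicted no-effect concentration (µg/L) from the lowest NOEC."""
    return lowest_noec_ug_per_l / assessment_factor

def requires_tier_b(pec_ug_per_l: float, pnec_ug_per_l: float) -> bool:
    """A PEC/PNEC ratio >= 1 flags a potential risk needing Tier B refinement."""
    return pec_ug_per_l / pnec_ug_per_l >= 1.0

# Hypothetical examples with a PEC of 0.5 µg/L:
print(requires_tier_b(0.5, pnec(100.0)))  # False: 0.5 / 10.0 = 0.05 < 1
print(requires_tier_b(0.5, pnec(2.0)))    # True:  0.5 / 0.2  = 2.5 >= 1
```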


Stage in regulatory evaluation | Stage in risk assessment | Objective | Method | Test/data requirement
Phase I | Pre-screening | Estimation of exposure | Action limit | Consumption data, logKow
Phase II, Tier A | Screening | Initial prediction of risk | Risk assessment | Base set of aquatic toxicology and fate data
Phase II, Tier B | Extended | Substance- and compartment-specific refinement and risk assessment | Risk assessment | Extended data set on emission, fate and effects

Table 1: The Phased Approach in Environmental Risk Assessment in Europe


18.0 Outcome of fate and effects analysis
In all cases, the medicinal benefit for patients takes precedence over environmental risks. This means that even when a novel medicinal compound, pharmaceutical agent or active pharmaceutical ingredient poses an unacceptable (residual) environmental risk after third-tier considerations, prohibition of the new active pharmaceutical ingredient is not considered.

If European regulators determine that the possibility of environmental risk cannot be excluded, mitigating, precautionary and safety measures may require the development of specific labeling designed to address the potential risk, as well as adding adequate information in the Summary of Product Characteristics (SPC), Package Leaflet (PL) for patient use, product storage and disposal. The information on the label, SPC and PL should also include information on how to minimize the discharge of the product into the environment and how to deal with disposal of unused product, such as in the case of shelf-life expiration.

In extreme cases, a recommendation may be included for restricted in-hospital or in-surgery administration under supervision only, a recommendation for environmental analytical monitoring, or a requirement for ecological field studies. [20] [23]

19.0 Combined effects
Often overlooked by regulators is the fact that the regulatory frameworks such as the European REACH Regulation, the Water Framework Directive (WFD) and the Marine Strategy Framework Directive (MSFD) mainly focus on toxicity assessment of individual chemicals or active pharmaceutical ingredients.

This poses a problem for the proper execution of environmental risk assessments and regulation, because the effect of contaminant mixtures containing multiple chemical agents and active pharmaceutical ingredients, regardless of their source, is a matter of growing, and recognized, scientific concern. [24]

To solve this problem, scientists are working on experimental, modeling and predictive environmental risk assessment approaches that use combined-effect data, biomarkers to characterize mode of action and toxicity pathways, and efforts to identify relevant risk scenarios related to the combined effects of pharmaceutical residues, trace contaminants and non-medicinal (industrial) chemicals. [24]

20.0 International harmonization
Created in the 1990s, the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) was set up as an agreement between the European Union, the United States and Japan to harmonize different regional and national requirements for registering pharmaceutical agents, in order to reduce the need to duplicate testing during the research and development of (novel) medicinal compounds, (bio) pharmaceutical agents and active pharmaceutical ingredients. However, to date, partly as a result of underlying differences in regulations and directives, environmental risk assessments have not been included in the harmonization procedures. [25]

In contrast, the International Cooperation on Harmonisation of Technical Requirements for Registration of Veterinary Medicinal Products (VICH), a trilateral arrangement similar to the ICH, set up in 1996 between the European Union, the United States and Japan, does include the assessment of ecotoxicity and the evaluation of the environmental impact of veterinary medicinal products.

The VICH guideline, intended to provide a common basis for an Environmental Impact Assessment or EIA, offers guidance for the use of a single set of environmental fate and toxicity data and is designed to guide scientists to secure the type of information needed to protect the environment. The guideline, published in 2004 and recommended for implementation in 2005, was developed as a scientifically objective tool to help scientists and regulators extract the maximum amount of information from studies to achieve an understanding of the potential (risk) of specific Veterinary Medicinal Products to the environment. [26]

21.0 Impact of Environmental Risk Assessment
Although an environmental risk assessment is part of the regulatory approval and marketing authorization process in both the United States and Europe, the actual impact can be different.

In Europe, an adverse environmental risk assessment for (novel) medical compounds, (bio) pharmaceutical agents or active pharmaceutical ingredients for human use does not impact or influence the marketing approval application. EU Directive 2004/27/EC/Paragraph 18 stipulates that the environmental impact should be assessed and, on a case-by-case basis, specific arrangements to limit it should be envisaged. In any event, the impact should not lead to refusal of a marketing authorization.

However, a parallel directive pertaining to veterinary medicine, as laid out in EU Directive 2009/9/EC, stipulates that, in the case of veterinary medicine, an environmental impact assessment should be conducted to assess the potential harmful effects and the kind of harm the use of such a product may cause to the environment, as well as to identify any precautionary measures that may be necessary to reduce such risk.

Furthermore, the directive requires that, in the case of live vaccine strains which may be zoonotic, the risk to humans also needs to be assessed. In the case of veterinary medicine, an environmental impact assessment is part of the overall risk-benefit assessment, and, in the case of a negative result, may potentially lead to a refusal to approve the medicinal compound, (bio) pharmaceutical agent or active pharmaceutical ingredient.

In the United States, the FDA has eliminated environmental assessment requirements for certain types of veterinary drugs when they are not expected to significantly affect the environment. However, a negative assessment, based on unacceptable risk to “food” or “non-food” animals, can result in a refusal of a New Animal Drug Application (NADA) or a Supplemental New Animal Drug Application (SNADA). [26]

22.0 Conclusion
The central question in the development of (novel) medicinal compounds, (bio) pharmaceutical products or active pharmaceutical ingredients for the treatment of human and veterinary disease is whether a novel agent will have an effect on the environment.

Regulators around the world, including in the United States and Europe, follow different assessment methodologies to ascertain these risks. However, all regulators use fate, exposure and effects data to help them understand if a (novel) medicinal compound, (bio) pharmaceutical agent or active pharmaceutical ingredient harbors a potential environmental risk, causing potential harmful effects on the ecosystem, and how this impacts human and veterinary health.

In all cases, environmental risk assessments are carried out based on scientifically sound premises, relying on established, accepted and universally known facts.

Overall, environmental risk assessments are useful analytical tools, providing critical information contributing to public health, as well as key instruments in guiding environmental policy decision-making.

As such, they play a key role in building a better, healthier world.


August 3, 2017 | Corresponding Author: Duane Huggett, Ph.D. | DOI: 10.14229/jadc.2017.29.08.001

Received: February 24, 2017 | Accepted for Publication: April 28, 2017 | Published online August 3, 2017

Last Editorial Review: August 3, 2017

Featured Image: Medical research laboratory with scientist using pipette. Courtesy: © Fotolia. Used with permission.


This work is published by InPress Media Group, LLC (Environmental Risk Assessment and New Drug Development) and is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. Non-commercial uses of the work are permitted without any further permission from InPress Media Group, LLC, provided the work is properly attributed. Permissions beyond the scope of this license may be available at adcreview.com/about-us/permission.


Copyright © 2017 InPress Media Group. All rights reserved. Republication or redistribution of InPress Media Group content, including by framing or similar means, is expressly prohibited without the prior written consent of InPress Media Group. InPress Media Group shall not be liable for any errors or delays in the content, or for any actions taken in reliance thereon. ADC Review / Journal of Antibody-drug Conjugates is a registered trademark of InPress Media Group around the world.

The post Environmental Risk Assessment and New Drug Development appeared first on ADC Review.
