The challenges of managing HIV infection include difficulties with direct detection of the virus, a prolonged asymptomatic period of infection, and decisions regarding the initiation of therapy and regimen changes in response to antiretroviral drug resistance or intolerance. All of these rely on laboratory testing to guide the clinician.
Several laboratory techniques are specific and unique to HIV infection, and medical providers should be aware of their availability, utility, and limitations.
Diagnosis is typically made by a combination of screening and confirmatory serologic tests.
During therapy, it is crucial to monitor the response, both in terms of immune reconstitution and viral replication, as both have different implications for the patient and neither may be evidenced by any outward signs or symptoms. Treatment failures may be due to the development of drug resistance mutations in the virus; some mutations may result in resistance to multiple antiretroviral agents or even to entire classes of drugs. Special tests include resistance testing by genotypic or phenotypic tests, pre-treatment tests for HLA-B*5701 in the context of abacavir, and viral tropism testing in the case of maraviroc.
The laboratory can assist with all of these issues, from diagnosis to drug-resistance testing.
For other discussions on management of HIV infection, see HIV Disease, Pediatric HIV Infection, and Antiretroviral Therapy for HIV Infection.
A key feature of HIV infection is the prolonged clinical latency that occurs prior to significant immune deficiency. [1, 2, 3, 4, 5] During this period (which can last several years from the time of initial infection), the individual may exhibit few or no symptoms but is still able to spread the infection to others and may be at risk of life-threatening opportunistic infections. Early detection is important for minimizing those risks.
In general, detection of HIV infection is indirect, through the use of specific serologic tests for antibodies to HIV proteins. Because HIV is a lifelong infection, positive serology is diagnostic of current rather than cleared infection. Some direct methods do exist, but they have limitations and their role is limited to specific situations.
One of the difficulties in diagnosing HIV is the so-called “window period” immediately after the acquisition of infection, during which serology may be negative. This period typically lasts from 3 weeks to 3 months. In studies of occupationally exposed health care workers, seroconversion typically occurred within 2 months of exposure; rarely, seroconversion took longer than 6 months. Fortunately, non-serologic tests can be employed to cover this period.
The primary diagnostic method is a combination of a sensitive screening assay for anti-HIV antibodies with a follow-up confirmatory test. An enzyme-linked immunosorbent assay (ELISA) or enzyme immunoassay (EIA) is a sensitive technique designed to detect antibodies to specific HIV proteins (eg, p24 capsid antigen, gp120 envelope glycoprotein).
In older assays, HIV proteins were bound to a surface such as a 96-well plate, and antibody binding took place in the well. Secondary detection of the bound antibody, using anti-human antibodies linked to an enzyme, would result in a colorimetric change that could be measured (see the image below).
More recently, less labor-intensive tests using solid-phase media similar to those in over-the-counter pregnancy test kits have resulted in a plethora of testing modalities. Rapid testing can currently be performed on whole blood, serum, urine, or saliva, depending on the test kit employed. [7, 8] Several tests have been "CLIA (Clinical Laboratory Improvement Amendments) waived" by the US Food and Drug Administration (FDA), meaning that no detailed training or laboratory validation is required to use these tests for clinical purposes.
High sensitivity comes at the price of lower specificity, although most modern screening tests for HIV demonstrate both excellent sensitivity and specificity. There are several known causes of false-positive screening results for HIV, and because a diagnosis of HIV infection has important consequences, it is essential that all patients with positive or indeterminate screening results undergo secondary testing using a different test modality.
Causes of false-positive HIV screening tests include the following:
Receipt of immune globulin
Recent influenza vaccination
Autoimmune disease (eg, systemic lupus erythematosus)
Participation in an HIV vaccine trial
Confirmatory testing of samples with positive or indeterminate screening results is traditionally performed using a Western blot or similar test. Specific responses to particular HIV proteins are detected by binding serum antibodies to viral proteins attached to a solid substrate. This can either be a true Western blot of proteins that have undergone electrophoresis or some other membrane-bound form of the proteins, such as a line assay.
There are many variants of these tests. Some use purified viral proteins from cultured virus, while others may use recombinant proteins expressed from in vitro systems. Unbound antibodies are removed and those that are left bound are detected using secondary methods, similar to the principle of ELISA testing. The primary difference is that the particular proteins to which the patient has antibody responses can be determined. Negative, positive, or indeterminate results are interpreted based on the specific "banding pattern" of responses.
Because test kits may use different modalities and different antigens from viral proteins, interpretation of a test must be performed according to the criteria for that specific test. Interpretation of banding patterns that have not been validated for that test may result in incorrect results.
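As an illustration of test-specific interpretation, the sketch below encodes one hypothetical rule set modeled on the widely cited CDC/ASTPHLD criteria for HIV-1 Western blots (reactivity to at least 2 of p24, gp41, and gp120/gp160). A real laboratory must apply only the criteria validated for its specific kit; the function and band names here are illustrative assumptions.

```python
# Hypothetical banding-pattern interpreter. The rule set mirrors the
# CDC/ASTPHLD HIV-1 Western blot criteria (positive = reactivity to at
# least 2 of p24, gp41, gp120/gp160); actual kits define their own rules.

KEY_BANDS = {"p24", "gp41", "gp120/gp160"}

def interpret_western_blot(reactive_bands):
    """Classify a set of reactive bands as negative, positive, or indeterminate."""
    bands = set(reactive_bands)
    if not bands:
        return "negative"      # no reactivity at all
    if len(bands & KEY_BANDS) >= 2:
        return "positive"      # meets the key-band criterion
    return "indeterminate"     # some reactivity, but criteria not met
```

Note that a different kit, with different antigens, would require a different `KEY_BANDS` set and a different decision rule.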
Antibody tests can distinguish between the different strains of HIV. HIV-1 and HIV-2 are the 2 main species, and HIV-1 is further subdivided into groups M, N, and O. Some early tests were unable to detect the O "outlier" group or HIV-2; modern tests should detect all known groups and distinguish HIV-1 from HIV-2 infection.
Antibody tests are typically positive no sooner than 2 weeks after exposure, but seroconversion may take as long as several months. Thus, testing algorithms for exposed patients include baseline serology with repeat testing at 1, 3, and 6 months post exposure.
Diagnosis in newborns
Serology is not helpful in the context of diagnosing neonatal HIV infection, because maternal IgG transferred during the third trimester of pregnancy will cause positive ELISA and Western blot results in all infants born to HIV-infected mothers. In fact, this is sometimes exploited as part of newborn screening tests to detect HIV exposure.
DNA polymerase chain reaction testing
Confirmation of infection relies on more direct methods of detecting the virus. The most reliable method is DNA polymerase chain reaction (PCR) testing. As a retrovirus, HIV has an RNA genome that is reverse-transcribed into a DNA intermediate stage. This DNA provirus is integrated into the host cellular genome, and from it all viral messenger RNA and viral genome RNA is transcribed. Proviral DNA can be detected in infected peripheral blood mononuclear cells using PCR amplification with HIV-specific primers.
The use of proviral DNA PCR is generally limited to the specific situation of ruling out HIV infection in neonates. Initial testing should be performed as soon as possible after birth, and because transmission typically occurs at the time of delivery, repeated negative results are required to rule out infection.
Optimally, exclusion of infection requires at least 2 further negative DNA PCR tests, one at 1 month of age and a second at 4 months. Including testing at birth, most neonates will therefore have at least 3 tests performed.
Final confirmation of HIV status (either infected or uninfected) requires serologic tests. Negative antibody tests after 6 months of age can be used as proof of lack of infection, but maternal antibody to HIV can persist beyond 12 months of age, so definitive proof may be delayed, even with negative DNA PCR results. Almost all infants will have lost their maternal antibody by 18 months of age.
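The DNA PCR exclusion schedule described above can be sketched as a simple decision function. This is a simplified illustration of the timing logic only, not a clinical algorithm; the function name and the use of month-based age cutoffs are assumptions of the sketch.

```python
def hiv_excluded_by_dna_pcr(results):
    """Simplified neonatal exclusion logic for the schedule described above.

    results: list of (age_in_months, pcr_positive) tuples.
    Exclusion requires no positive PCR result, plus negative tests at
    about 1 month of age and again at about 4 months of age.
    """
    if any(positive for _, positive in results):
        return False  # any positive PCR requires confirmatory workup instead
    negative_ages = [age for age, positive in results if not positive]
    return (any(1 <= age < 4 for age in negative_ages)
            and any(age >= 4 for age in negative_ages))
```

A birth test alone, or a positive result at any point, leaves the infant's status unresolved by this logic; final confirmation still requires serology, as noted above.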
RNA polymerase chain reaction testing
Detection of viral RNA genomes through reverse-transcription PCR (RT-PCR) is an acceptable alternative for neonatal diagnosis, but it has some limitations. RNA viral testing in adults has been associated with false-positive results that have led to incorrect diagnoses. Conversely, if the infant is on antiretroviral therapy (most will be receiving at least zidovudine [AZT] monotherapy), viral replication may be inhibited, leading to negative RNA tests. DNA tests measure infected cells and are therefore relatively unaffected by antiretroviral therapy.
One setting where RNA testing may be useful is in diagnosing very early HIV infection. Antibodies may take several weeks to appear after acquisition of infection, so a person may be infected (and therefore infectious) but have negative serologic tests (ie, the "window period"). The level of HIV RNA in the peripheral blood (ie, the viral load) is very high during these initial weeks.
RNA testing is discouraged for routine care; serologic tests several weeks after a possible exposure are preferred. However, qualitative RNA testing is approved for use in screening blood donations in combination with serologic tests.
P24 antigen is the 24 kilodalton (kD) peptide product cleaved from the center of the Gag polyprotein. It makes up the capsid of the viral particle, and is also known as the capsid antigen or CA. Historically, p24 antigenemia was found to be present in patients with AIDS and those with asymptomatic HIV infection.
As a single test, the p24 antigen assay lacks sensitivity, but the assay has enjoyed a recent resurgence as part of combination HIV screening tests, alongside ELISA for antibodies. In theory, p24 antigenemia could be used to detect infected individuals during the window period before seroconversion, as the p24 level is relatively high during the time before the specific immune response can control HIV replication.
The use of a combination HIV antigen/antibody screening assay was found to be superior in sensitivity and specificity for detecting early infections compared with an HIV RNA nucleic acid test. However, an analysis of healthcare costs suggested that the RNA-detection assay would result in greater health care savings in the longer term.
Viral culture is possible but is labor intensive, requires specialist laboratory equipment and training (biosafety level 3 containment facilities), and may take several days to weeks to yield results. It has no place in routine clinical diagnostics.
Before the first HIV tests were developed, many patients presented relatively late in infection and, as a consequence, progressed rapidly to death. Because of this, it was initially feared that HIV infection was generally a rapidly progressive disease. Together with the thought that aggressive antiretroviral therapy might prevent loss of immune responses, the concept of "hit hard, hit early" was born. Patients were started on therapy as soon as a diagnosis was made.
This approach proved problematic with single-agent monotherapy, however, as drug resistance would inevitably arise within a few months. Even when effective combination therapy was developed, the side effects of the medications and demanding dosing regimens made compliance difficult. Patients who were asymptomatically infected often found their quality of life severely affected by starting therapy, and drug resistance could still occur, making treatment later on much more difficult.
Over time, several studies led to the conclusion that a balance has to be struck between the hazards of delayed initiation of therapy and those of premature initiation. Current guidelines for treatment initiation rely on 3 measures: the patient’s clinical status (eg, past or current opportunistic infections [OIs], age), CD4+ T cell count, and HIV peripheral viral load. 
Before starting antiretroviral therapy, certain laboratory tests are indicated if particular agents are being considered.
Abacavir is a nucleoside analogue reverse transcriptase inhibitor that is associated with severe hypersensitivity reactions in 2-9% of patients, with those positive for HLA-B*5701 at the greatest risk. The reactions can be lethal. Thus, specific genetic testing for HLA-B*5701 should be performed prior to starting therapy with this drug.
HIV requires an additional co-receptor for cell entry in addition to CD4. The co-receptor that HIV uses depends on the cell type: for CD4+ T cells, it is CXCR4; for macrophages, it is CCR5. Early in infection, HIV almost exclusively uses the CCR5 co-receptor, but in many individuals there is a shift to using alternative receptors such as CXCR4. This shift has important implications for choosing antiretroviral regimens.
Maraviroc is an entry inhibitor that blocks binding to the CCR5 co-receptor. Prior to starting therapy with maraviroc, viral tropism testing should be performed to confirm that CXCR4-tropic or dual-tropic strains are not present, as these would be quickly selected for, rendering treatment useless. In contrast, the fusion inhibitor enfuvirtide acts on the gp41 viral protein; it is active against CCR5-tropic, CXCR4-tropic, and dual-tropic strains of HIV-1.
HIV has a particular tropism for cells with the CD4 protein on their surface, which it binds to with the gp120 envelope glycoprotein. Cells that have CD4 on their surface include macrophages, glial cells in the brain, and T-helper and T-regulatory cells. Macrophages are the site of a long-lived HIV proviral reservoir that makes eradication practically impossible. Loss of glial cells is thought to be related to AIDS-encephalopathy. 
CD4+ T cells are lost through a number of mechanisms. HIV itself is directly cytotoxic to T cells, but cell death also occurs as part of the natural immune response and cell activation that occurs with any chronic infection. In the case of HIV, however, it appears as if T-cell replacement from the thymus is adversely affected, so an increased T-cell turnover is not compensated for by a concomitant increase in production.
Long before these mechanisms were fully understood, it was clear that AIDS was characterized by a specific loss of CD4+ T cells. CD8+ T cells were relatively spared, leading to a reversal of the usual CD4/CD8 ratio.
As a result of the loss of CD4+ T cells, several areas of the immune system are affected. Because of the lack of T-cell help, both cellular CD8 responses and humoral antibody responses are less effective. CD8+ cytotoxic T cells become less likely to respond to antigens, both recall antigens that previously elicited positive responses and novel targets.
B cells, on the other hand, tend to overproduce IgG, leading to hypergammaglobulinemia, but the specificity of this antibody is poor, and overall this leads to impairment in antibody-mediated protection. Eventually, immune dysfunction progresses to the point that life-threatening OIs can occur.
CD4+ T cells can be readily measured by flow cytometry, using specific antibodies that label CD4 as well as other T cell markers such as CD3. Typically, CD4 counts are performed as part of a complete lymphocyte subset analysis, giving both percentages and absolute numbers of CD4+ and CD8+ T cells, B cells, and natural killer (NK) cells. The magnitude of discordance between absolute CD4+ T-cell numbers and CD4+ T-cell percentages is greatest in those HIV patients co-infected with active hepatitis C virus and more advanced liver disease. 
In 1993, the Centers for Disease Control and Prevention (CDC) added a CD4+ T-cell count below 200 cells/µL as a specific surveillance definition of AIDS. Prior to 1993, AIDS could be diagnosed only when an OI occurred in a patient infected with HIV. This change gave a much more accurate representation of those people with the highest risk of OIs and death. Early treatment guidelines used this level as the point at which treatment should be initiated, but several observations (and later, formal clinical trials) prompted a shift in the cutoff.
Although no clear benefit could be seen in patients who started treatment at CD4+ T-cell counts of 500 cells/µL or above, those who waited until their counts were below 200 cells/µL were less likely to have significant improvement in CD4 cell numbers, and they had reduced reconstitution of their immune repertoire (ie, the antigens to which their immune system could respond). Pretreatment CD4 count may have a prognostic role in predicting risk of death even after the initiation of antiretroviral therapy.
Formal studies of treatment initiation at various CD4+ T-cell counts have shown that patients who start treatment when their count drops below 350 cells/µL experience less morbidity and mortality than those who wait until their count drops below 200 cells/µL. [17, 18, 19] Current guidelines reflect this finding.
In children, particularly infants, the absolute CD4+ T-cell count may not accurately reflect the current risk of immune dysfunction, so the percentage is used instead. When CD4+ T cells constitute less than 25% of total lymphocytes, relative deficiency is diagnosed. Absolute counts as high as 1000 cells/µL in children younger than 5 years may be an indication to start treatment. In very young infants infected perinatally, treatment should be started regardless of the CD4+ T-cell count. [20, 21] In adults with chronic hepatitis C and low absolute CD4+ T-cell counts, the CD4 percentage may also be more useful, owing to probable T-cell sequestration in the liver.
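The percentage criterion is simple arithmetic; a minimal sketch follows, with the 25% threshold taken from the text and the function names chosen for illustration.

```python
def cd4_percentage(cd4_count, total_lymphocytes):
    """CD4+ T cells as a percentage of total lymphocytes."""
    return 100.0 * cd4_count / total_lymphocytes

def relative_cd4_deficiency(cd4_count, total_lymphocytes):
    """Pediatric relative deficiency: CD4+ cells below 25% of lymphocytes."""
    return cd4_percentage(cd4_count, total_lymphocytes) < 25.0
```

For example, a child with 500 CD4+ cells among 2500 total lymphocytes (20%) meets the relative-deficiency criterion even though the absolute count appears substantial.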
Because CD4+ T-cell counts decline over time in untreated individuals and may naturally vary from time to time, regular repeated testing is required to accurately assess the immune status of any given patient. Typically, it is recommended that CD4+ T-cell counts be monitored every 3-12 months, depending on clinical status, to help with the timing of treatment initiation or to provide alerts for possible drug resistance or noncompliance with treatment. However, viral load provides a better measure of resistance and noncompliance.
After starting therapy, there are often times when there is a need to alter the drugs used in the specific regimen, due to either clinical failure or an intolerance of side effects. Routine monitoring of viral load can help guide therapy before clinical failure occurs and can also assist with changing the antiretroviral regimen to address resistance when it occurs.
HIV replication occurs primarily in lymphoid tissue, but high levels of viremia can be detected in infected individuals. Early studies made it clear that the level of viremia at the "set point" (ie, after acute infection, but prior to significant immune decline) was associated with the rate of CD4+ T-cell loss and hence the time to AIDS and death. [14, 15] One analogy is that if the CD4+ T-cell count can be thought of as how far a train is from the end of the line, the viral load is a measure of how fast the train is going.
Viral load can be considered a quantitative measure of viral RNA genomes in the peripheral circulation. The most common techniques involve some form of reverse transcription to convert viral RNA (which is readily destroyed by ubiquitous RNAses) to more easily manipulated DNA.
After reverse transcription, the laboratory uses one of various amplification techniques that have been developed to provide a quantitative measure of the DNA.  Most rely on some form of nucleic acid amplification such as polymerase chain reaction (PCR). Internal standards of amplification targets of known concentration are compared to patient specimens in order to provide quantitation of viral load.
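As a rough sketch of how internal standards yield quantitation, the code below fits a standard curve of real-time PCR cycle thresholds (Ct) against log10 input copies and inverts it for a patient specimen. The linear Ct model and least-squares fit are illustrative assumptions, not any vendor's actual method.

```python
import math

def fit_standard_curve(standards):
    """Least-squares fit of ct = slope * log10(copies) + intercept, using
    internal standards of known concentration: [(copies_per_ml, ct), ...]."""
    xs = [math.log10(copies) for copies, _ in standards]
    ys = [ct for _, ct in standards]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def copies_per_ml(ct, slope, intercept):
    """Invert the fitted curve to estimate viral load from a specimen's Ct."""
    return 10 ** ((ct - intercept) / slope)
```

Because Ct falls as input rises, the fitted slope is negative; a specimen's Ct that lands between two standards interpolates to an intermediate copy number.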
Because nucleic acid amplification techniques are incredibly sensitive, it is crucial to avoid contamination of specimens in the laboratory. Not only is the specific quantitative analysis of viral load important in clinical management, but qualitative viral load tests are used in screening of blood donations. Incorrect diagnoses of HIV infection have been made because of false-positive viral load assays (strictly speaking, these reflect an error on the part of the physician in ordering viral load testing instead of serology).
Very high viral loads are associated with rapid progression to AIDS, and in the past, treatment initiation guidelines included viral load as part of the decision-making criteria. The most recent guidelines focus on the CD4+ T-cell count as the preferred marker to assess treatment initiation, but viral load should be measured at baseline to assess response to treatment.
An optimal response to treatment would be a reduction in the viral load to undetectable levels (for most assays, this equates to levels of less than 50 copies/mL, but some assays may be able to go lower). An occasional "blip" may occur; these are typically below 200 copies/mL, although false-positive quantitative RNA viral load tests can occur at counts under 1000 copies/mL. For the purpose of monitoring therapy, the Panel on Antiretroviral Guidelines for Adults and Adolescents defines virologic failure as a confirmed viral load above 200 copies/mL.
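These thresholds lend themselves to a small classifier. The numeric cutoffs come from the text; the "repeat to confirm" labeling is an assumption of this sketch, reflecting that failure requires a confirmed elevation.

```python
def classify_on_treatment(viral_load, confirmed_on_repeat=False, detection_limit=50):
    """Classify an on-treatment viral load (copies/mL) using the thresholds
    discussed above: below the detection limit = undetectable, up to 200 =
    possible blip, above 200 confirmed on repeat = virologic failure."""
    if viral_load < detection_limit:
        return "undetectable"
    if viral_load <= 200:
        return "blip"
    return "virologic failure" if confirmed_on_repeat else "repeat to confirm"
```

A single value above 200 copies/mL is thus never labeled failure on its own; only a confirmed elevation is.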
HIV rapidly develops resistance to antiretrovirals when they are used as single agents, but resistance can occur even with combination therapy. In addition, transmission of HIV from an individual who is already receiving therapy might result in drug-resistant virus being present even in treatment-naive individuals. For this reason, many experts recommend performing resistance testing before starting therapy. Nevertheless, the largest use of these tests is in the management of patients with established HIV disease.
There are 2 main methods to detect antiretroviral resistance in HIV: genotypic and phenotypic testing. Both are usually performed in specialist laboratories.
Genotypic testing relies on the fact that many drug-resistance mutations have been well described and characterized from in vitro work or patient specimens.  Amplification and sequencing of target areas of the genome can provide rapid detection of probable resistance mutations to several antiretroviral medications simultaneously.
The main limitations of genotypic testing are that sufficient virus must be present for testing (low viral load tends to render this testing insensitive but, arguably, also redundant) and that only those mutations known to confer drug resistance can be tested for.
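In spirit, genotypic interpretation is a lookup of sequenced positions against catalogued mutations. The sketch below uses a deliberately tiny subset of two well-described reverse-transcriptase mutations (M184V and K103N); real reports draw on large, curated interpretation databases, and the table and function here are illustrative only.

```python
# Tiny illustrative subset of well-described RT resistance mutations;
# production genotyping relies on large curated interpretation databases.
RT_RESISTANCE = {
    (184, "V"): ("lamivudine", "emtricitabine"),  # M184V
    (103, "N"): ("efavirenz", "nevirapine"),      # K103N
}

def detect_resistance(rt_sequence):
    """rt_sequence: amino acid string of the RT product (positions 1-indexed).
    Returns the sorted list of drugs with a probable resistance mutation."""
    affected = set()
    for (position, mutant), drugs in RT_RESISTANCE.items():
        if len(rt_sequence) >= position and rt_sequence[position - 1] == mutant:
            affected.update(drugs)
    return sorted(affected)
```

The lookup structure also makes the second limitation above concrete: a mutation absent from the table is invisible to the test, no matter how much resistance it confers.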
In addition, low-level resistance might be missed in a mixed population of viral quasispecies, a situation that often exists in persons infected with HIV. In general, this subpopulation would be expected to expand under treatment pressure, and later testing should detect it.
Phenotypic testing is performed with actual viral culture, testing the virus’s ability to replicate in the presence of antiretroviral drugs. It too relies on sufficient virus being present to test, and it is more labor intensive. It may, however, be better at detecting mixed populations of resistant and sensitive quasispecies.