Oracle Am., Inc. v. Google Inc.

UNITED STATES DISTRICT COURT FOR THE NORTHERN DISTRICT OF CALIFORNIA
May 2, 2016
No. C 10-03561 WHA (N.D. Cal. May 2, 2016)

Opinion

ORACLE AMERICA, INC., Plaintiff, v. GOOGLE INC., Defendant.


MEMORANDUM OPINION RE ORACLE'S MOTION IN LIMINE NO. 5 TO EXCLUDE TESTIMONY OF GOOGLE'S SURVEY EXPERT DR. ITAMAR SIMONSON

INTRODUCTION

In this copyright infringement action involving Java and Android, plaintiff moves to exclude the survey and opinion of defense expert Dr. Itamar Simonson. The final pretrial order held that Google could offer Simonson's testimony subject to the following limitations. First, Simonson must make clear that his survey was directed at the factors developers consider in general when deciding which platform to develop for, and he may not offer any conclusion about whether that general proposition specifically applies to 2007-09. Second, Simonson may not opine on the meaning that survey respondents attributed to the ambiguous and overlapping terms "popularity," "established user base," or "market demand." Third, Simonson must adjust his testimony to reflect only the conclusions in his survey, without the inclusion of pre-testing results.

This memorandum opinion explains the reasoning for that ruling.

STATEMENT

Dr. Itamar Simonson conducted a survey "to assess the key drivers of application developers' decisions whether to develop applications for a mobile platform" (Simonson Rpt. ¶ 10). He identified four conclusions based on the survey. First, expected demand and profitability are "by far" the most important factors considered by developers. Second, prior familiarity with a programming language is, "at most, a minor consideration for the overwhelming majority of application developers." Third, "[t]he great majority of application developers" are confident they can learn new programming languages to meet user demand for applications. Fourth, the fact that iOS application developers were willing to learn new languages provides "further evidence" that economic considerations are more important than prior familiarity with a programming language (id. ¶ 12). Google proffers Simonson's survey to rebut Oracle's claim for disgorgement of Google's profits from Android by suggesting that familiarity with Java did not in fact motivate developers to develop for Android (thus minimizing the importance of the declaring code and SSO of the 37 API packages at issue).

Simonson began with a list of over 5,500 developers, from which he randomly selected 152 to survey. Respondents were interviewed by phone using the Computer-Assisted Telephone Interviewing (CATI) technique. To participate in the survey, respondents had to meet four initial screening criteria. First, they had to develop applications for smartphones or tablets. Second, they had to "make or influence" decisions on "whether to develop new applications." Third, they had to develop applications for at least one of four major mobile platforms. Fourth, neither they nor members of their household could work for a market research firm, advertising agency, or public relations firm (id. ¶¶ 18, 20, 24).

Simonson pre-tested his questionnaire with twenty-three respondents (id. ¶ 22). Based on pretest results, he made two changes. First, he added a question: "In general, do you make decisions about which applications to develop independently, or as part of a team of application developers?" (id. ¶ 22, Exh. E). Second, he rephrased part of a question from "Please rate your capability to develop and establish in the market a completely new programming language" to "Please rate your capability to develop and establish a completely new programming language in the market" because the pretest suggested some respondents misinterpreted the question (ibid.). The record does not reveal what this misinterpretation was or how it may have affected pretest responses. Simonson included the pretest results in his final results.

The survey was administered by experienced interviewers from Target Research Group. The interviewers, research firm, respondents, and staff who coded respondents' open-ended answers were "blind" as to the study's purpose and the identity of its sponsor. Field Solutions, an independent research firm, conducted a validation survey, reached 149 of the 152 respondents, and discovered no discrepancies in the results (id. ¶¶ 14, 23).

ANALYSIS

An expert witness may provide opinion testimony "if (1) the testimony is based upon sufficient facts or data, (2) the testimony is the product of reliable principles and methods, and (3) the witness has applied the principles and methods reliably to the facts of the case." Fed. R. Evid. 702. District courts are charged with a "gatekeeping role" to ensure that expert testimony admitted into evidence is both reliable and relevant. Sundance, Inc. v. DeMonte Fabricating Ltd., 550 F.3d 1356, 1360 (Fed. Cir. 2008); see Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579, 589 (1993).

Oracle raises several objections to Simonson's survey. This memorandum addresses each in turn.

1. GOOGLE'S INTERNAL DOCUMENTS.

Oracle points out that evidence from Google's own internal documents indicates Google copied parts of the Java APIs specifically to tap into the Java developer community, suggesting that Google believed prior familiarity with the programming language was more attractive to developers than the promise of profits. Oracle claims this "completely" contradicts Simonson's conclusions, and that Simonson's survey is thus both irrelevant and unreliable. Not so.

The evidence Oracle cites might indicate that Google believed a familiar programming language would play a significant role in attracting developers. However, the question addressed by Simonson's survey was not whether Google believed prior familiarity with a programming language was an important consideration for developers, but whether developers thought of it as such. Thus, contrary to Oracle's assertion, evidence of Google's strategic predictions does not "completely" contradict Simonson's conclusions (even though it contradicts those conclusions in part). Oracle suggests discrepancies between the two must mean Simonson's conclusions are unreliable, but they could also simply indicate that Google's predictions of what motivated developers were wrong. Moreover, insofar as evidence of Google's strategic considerations tends to contradict Simonson's conclusions, such evidence speaks to the weight of his opinion, not its admissibility. See Daubert, 509 U.S. at 596 ("presentation of contrary evidence" is a "traditional and appropriate means of attacking shaky but admissible evidence").

The Court suspects that Simonson will have a hard time on cross explaining away Google's own contrary comments, but his survey cannot be excluded simply on that ground.

Oracle suggests that the survey's potential to mislead the jury outweighs its probative value. Specifically, Oracle claims "Google would use that survey to trick jurors into rejecting Oracle's powerful evidence from Google of why Google copied" (Pl.'s Reply MIL No. 5 at 1). However, as noted above, Oracle's evidence from Google tends to show only what Google perceived and believed about developers' motivations. It is only one way of getting at the greater issue of whether and to what extent Google's copying of the declaring code and SSO (structure, sequence, and organization) of the 37 APIs at issue drove Android's success. Simonson's survey of developers is another way. It is not quite "trickery" for Google to present competing evidence against Oracle on a factual dispute at issue in this case. Arguments on how "powerful" or persuasive this competing evidence is must be directed to the jury.

Oracle also suggests Simonson's survey is irrelevant because it deals with only "one of several ways Oracle shows a causal nexus between Google's infringement and the Android-related profits," but this argument goes to the survey's weight, not its relevance or admissibility (see id. at 1-2). The fact that other evidence might also be relevant does not in and of itself undermine the survey's relevance. Oracle cites Dr. James Kearl, the court-appointed damages expert, for the proposition that Simonson's survey is irrelevant to the issue of damages (specifically, disgorgement of profits) because whether Google's copying in fact attracted developers is "a different question" from whether "Google thought it needed [Java] at launch" (id. at 3). However, Kearl also said that the jury would need to weigh the effect of any conclusion that consumer demand for Android attracted developers (rather than the converse). While Simonson's survey and Oracle's evidence from Google do present different questions, both are ultimately relevant to disputed facts at issue in this case.

2. SURVEY QUESTIONS.

Simonson's survey was administered in December of 2015 and January of 2016 (Simonson Rpt. ¶¶ 22-23). However, Android's launch period was in 2007-09. Oracle claims that in 2007-09, the applications market was in its infancy and no one knew if developing applications would be profitable; now, however, the market is well-established, so developers are more likely to invest in new platforms. Thus, Oracle argues, Simonson's 2015-16 survey fails to represent marketplace conditions in 2007-09.

Google admits Simonson operated on the premise that specific market conditions would not affect developers' decisions, so there was no need to recreate specific market conditions in his survey. Oracle contends, however, that specific market conditions do in fact affect developers' decisions. Oracle cites the report of Dr. Olivier Toubia, an expert retained by Oracle to analyze and respond to Simonson's survey, as support for its contention. Specifically, Oracle cites paragraphs 26-36 in Toubia's report for the proposition that the applications market today differs drastically from the market in 2007-09, such that developers today are more likely to invest in new platforms than they were in 2007-09. Toubia's report, however, does not support Oracle's claim. Some cited paragraphs broadly critique Simonson's failure to recreate or account for the specific 2007-09 historical context for his survey (Toubia Rpt. ¶¶ 26, 28, 36). Others generally assert that the applications market has undergone substantial changes since Android's launch (id. ¶¶ 29-30, 34). Still others suggest the phrasing of Simonson's questions could have been confusing or misleading to some respondents (id. ¶¶ 28, 35). Approximately half of the portion of Toubia's report cited by Oracle essentially parrots Oracle's argument that Google's copying of the declaring code and SSO of the 37 API packages was an important driver of Android's success (id. ¶¶ 27, 29, 31-34).

In short, nowhere does Toubia actually show, as Oracle claims, that developers today are more likely to invest in new platforms than they were in 2007-09. Oracle has thus presented no evidence for the proposition that developers' motivations are different today than they were in 2007-09. In other words, Oracle has not successfully challenged Simonson's premise that a survey of developers in 2015-16 is relevant to, and probative of, the question of what motivated developers in 2007-09.

Oracle cites Kwan Software Eng'g, Inc. v. Foray Techs., LLC, No. C 12-03762 SI, 2014 WL 572290, at *4-5 (N.D. Cal. Feb. 11, 2014) (Judge Susan Illston), for the proposition that failure to approximate actual marketplace conditions can provide grounds for inadmissibility. As discussed below, however, Kwan is distinguishable. Simonson's survey is probative of developers' motivations for developing for a new platform in general, though the weight accorded his conclusions may be diminished by contrary evidence that developers' motivations have changed with the market.

Oracle also points out that Android was not yet popular with users in 2007-09, although Simonson's survey found a platform's popularity was the most important factor for developers. Thus, Oracle contends, Simonson's survey and conclusions should be excluded for failing to address the question he purports to answer. Not so.

Oracle bases this argument on paragraphs 40, 47, and 58 of Simonson's report. Paragraph 47 does not assert the conclusion Oracle challenges; it states only that 116 of 152 respondents started developing Android applications at some point between 2007 and 2015 (Simonson Rpt. ¶ 47). Oracle likely meant to refer to paragraph 48, which explains that 66 of the 116 respondents who developed Android applications identified "User base/Market share/Demand/Popularity/ROI [return on investment]" as their first consideration (id. ¶ 48).

Simonson's methodology was to ask respondents to list and rank the importance of various decision-making factors (Simonson Rpt. ¶¶ 40, 48, 57). Oracle fails to undermine the adequacy of Simonson's methodology for addressing the question of what motivates developers to develop for a particular platform. Oracle's argument is essentially that because Simonson's methodology produced one particular finding that is unhelpful to the ultimate purpose of the survey, the entire survey should be excluded. This argument is meritless. If Simonson has evidence that popularity is a main consideration for developers, and Oracle has evidence that Android was not popular in 2007-09, both can be presented to the jury to consider as they see fit in determining what motivated developers to develop for Android in 2007-09.

Moreover, Simonson's survey as a whole would still be relevant because it identified and weighed the relative importance of multiple factors affecting developer decisions. In claiming Android initially had no user base, Oracle states, "developers had to be motivated by something else" (Pl.'s Reply MIL No. 5 at 1). Insofar as Simonson's survey is probative of what that "something else" might be and finds that "something else" was likely not prior familiarity with the programming language, it is relevant to factual disputes at issue in this case. Oracle neglects to even mention this other key finding of Simonson's survey: that prior familiarity with a programming language is not an important consideration to developers in general (Simonson Rpt. ¶¶ 40, 48, 57). This finding is relevant to, and probative of, the issue of whether and to what extent Google's copying drove Android's success. That the same methodology which produced this finding also produced other, perhaps less probative findings does not warrant exclusion of Simonson's entire survey and conclusions.

Simonson's survey, however, did not attempt to parse out the various components of a platform's "popularity," nor did it attempt to examine any of the other factors identified as significant by developer respondents. For example, the survey did not define, much less explain, what constitutes an "established user base," or consider what factors might contribute to market demand for a particular platform. The survey thus provides insufficient basis for any expert opinion as to why Android, or any platform, was or was not popular at any given point in time. Disputed facts at issue in this case, however, include whether, when, and to what extent Google's copying of the declaring code and SSO of 37 APIs contributed to Android's overall success, including its popularity, user base, and market presence. Due to this potential overlap in common terminology, Simonson's survey and opinions could confuse or mislead the jury. Therefore, Simonson is expressly prohibited from attempting to define or analyze specific factors like "popularity," "established user base," or "market demand" in his survey results, insofar as those terms were not specifically defined or analyzed in the survey questionnaire. See Fed. R. Evid. 403. Simonson is also specifically prohibited from opining as to whether or how specific factors contribute to a platform's overall success. See id. This prohibition does not, however, limit Simonson's ability to testify as to his survey results to the extent they indicate what factors developers in general consider in deciding whether to develop for a particular platform.

3. SURVEY RESPONDENTS.

Oracle raises two objections to Simonson's survey sample. First, Oracle points out that most developers surveyed were not developing applications for Android in 2007-09. Second, Oracle contends Simonson's screening for respondents was too broad because he included not only developers who actually decided which platforms to develop for, but also those who only influenced such decisions. Oracle essentially claims the only "proper universe" of people for this survey would have been developers who actually made the decision to develop applications for Android in 2007-09.

The standard for a "proper universe" of respondents, such that a survey would be sufficiently reliable to be admissible, is not as demanding as Oracle claims. Oracle cites three decisions to support its position: Kwan, 2014 WL 572290; ThermoLife Int'l, LLC v. Gaspari Nutrition, Inc., No. CV-11-01056-PHX-NVW, 2014 WL 99017 (D. Ariz. Jan. 10, 2014) (Judge Neil V. Wake) (vacated and remanded); and Reinsdorf v. Skechers U.S.A., 922 F. Supp. 2d 866 (C.D. Cal. Feb. 6, 2013) (Judge Dean D. Pregerson). Each decision is distinguishable. Moreover, as explained below, the standard Oracle proposes for survey admissibility was recently rejected by the Ninth Circuit when it vacated the ThermoLife decision.

In Kwan, 2014 WL 572290, at *4-5, the court excluded an expert's survey and opinions that were proffered to support a false advertising claim arising from advertising for photo software. The survey purported to show that the advertisements at issue were likely to mislead or confuse consumers. However, the survey did not focus on potential users of the software; its respondents were not even people who would see the alleged misrepresentations, much less potential purchasers of the software. The proffering party "made no attempt to show" the survey's probative value despite its unrepresentative sample. Id. at *5. The survey was thus inadmissible because the proffering party had not shown that it was relevant or reliable.

In contrast, Simonson ensured that at least half of his respondents developed applications specifically for Android (Simonson Rpt., Exh. E). He also compared the responses of Android developers to those of developers for other platforms, and found them to be consistent with each other (Simonson Rpt. ¶ 49). This analysis showed no significant distinctions between the motivations of Android developers and developers in general, such that the survey would be unacceptably unrepresentative. Moreover, unlike the expert in Kwan, Simonson does not purport to draw specific conclusions (i.e., about the motivations of Android developers in 2007-09), but offers more general conclusions about developers' motivations in general (id. ¶ 12). His conclusions are thus adequately supported by his methodology.

In ThermoLife, 2014 WL 99017, at *2, the court excluded an expert's survey and opinions that purported to determine whether certain statements about a product affected consumers' buying decisions. The survey did not state when it was conducted or how participants were solicited. It made no attempt to show that survey respondents were representative of potential consumers of the products at issue. Specifically, survey respondents included consumers who could not have used the specific product at issue for at least two years at the time of the survey. Survey questions were worded to obtain a biased response favorable to the proffering party. And the conclusions the expert drew from the survey exceeded the scope of the survey's findings in favor of the proffering party.

Unlike the survey in ThermoLife, Simonson's survey explained how it was conducted and how participants were solicited (Simonson Rpt. ¶¶ 18, 24). As described above, the survey attempted to show that its respondents were representative of the studied population. The survey questions were not worded to obtain biased responses favorable to the proffering party (id., Exh. E). And, as explained above, Simonson's conclusions do not exceed the scope of his survey.

Notably, the Ninth Circuit recently vacated and remanded the ThermoLife decision, finding among other things that the district court improperly excluded the proffered survey and accompanying expert opinion evidence. ThermoLife Int'l v. Gaspari Nutrition, No. 14-15180, 2016 U.S. App. LEXIS 6807, at *4-7 (9th Cir. Apr. 14, 2016). Specifically, the Ninth Circuit concluded that "[a]lthough the district court faulted the survey's biased questions and unrepresentative sample, neither defect was so serious as to preclude the survey's admissibility." Id. at *6. Objections based on such defects went only to the weight, not the admissibility, of the survey. Moreover, the court explicitly observed that the survey included respondents from both what the district court deemed the relevant consumer class, and a more general consumer population that was merely probative of the specific class at issue. The court found this mixed sample "did not severely limit the probative value of the survey's results." Ibid. (internal citations omitted).

In Reinsdorf, 922 F. Supp. 2d at 873, the court excluded an expert's survey and opinions that purported to test brand recognition but were proffered as evidence that "one can fairly easily parse how much of the audience appeal of the work originates from the various elements." The survey provided no basis to indicate how its sample was selected, or why its respondents were representative of the relevant population. The survey format used images that produced biased, unreliable results, and provided respondents with no basis for meaningful brand comparison. The proffering party made "virtually no attempt to defend [the expert's] methods," and could not identify any scientific principles underlying the survey, which appeared to violate numerous accepted practices in the field of survey research. Id. at 878-79.

Simonson's survey does not share the flaws of the survey in Reinsdorf. As explained above, he does not purport to draw conclusions beyond the scope of his survey. The survey itself explained how its sample was chosen, and why its respondents were representative of the studied population. The survey format was not designed to produce biased results. Google, unlike the proffering party in Reinsdorf, defends Simonson's methods. Simonson identified the scientific principles underlying his survey (Simonson Rpt. ¶¶ 17, 23-24). And the survey did not appear to violate numerous accepted practices in the field of survey research.

Oracle does not dispute that Simonson's randomly selected sample of 152 developers is representative of the mobile application developer population (see id. ¶ 10). Oracle's objection is essentially that the motivations of these 152 developers are not representative of the motivations of decision-making Android developers in 2007-09. However, none of the decisions cited by Oracle go so far as to suggest that a survey is inadmissible unless its sample was exactly representative of the studied population within the precise timeframe at issue. In fact, in its decision remanding ThermoLife, our court of appeals explicitly rejected such an approach, holding the district court abused its discretion where it excluded a survey because, among other defects, the sample included both directly relevant respondents and respondents who were only generally probative of the relevant population. Oracle's argument essentially relies on the reasoning of the ThermoLife decision, now rejected by the court of appeals. That error will not be repeated here.

Therefore, as long as Simonson does not purport to draw conclusions specific to Android developers in 2007-09, his survey sample did not need to be limited to respondents from that population in order to produce reliable results. As a precaution, Simonson will be required to clarify that his survey results indicate the motivations of developers in general, not the specific motivations of Android developers within the 2007-09 timeframe. Simonson may attempt to explain why and how his findings and conclusions are nonetheless probative of what motivated Android developers in 2007-09, subject to cross-examination and the presentation of contrary evidence.

Oracle further argues that making an independent decision to develop for a platform is different from influencing a decision to develop for a platform, but it is unclear how this distinction would render Simonson's survey inadmissible. Simonson's survey and opinion purport to show what attracts developers to a platform. The motivations of "influencing" developers may be less probative of this issue than the motivations of "decision-making" developers, but they are still probative insofar as they contributed to the overall attractiveness of a platform to developers.

Oracle also provides no basis for the suggestion that Android developers have different motivations than developers in general in choosing which platform to develop for. Moreover, Simonson's four ultimate conclusions do not purport to be specific to Android developers (Simonson Rpt. ¶ 12). Rather, his conclusions speak to the motivations of developers in general — which is appropriate given his survey sample. He specifically ensured that at least half of his sample consisted of Android developers to show that Android developers' motivations do not differ significantly from developers' motivations in general, and to demonstrate the probative value of his survey (see id. ¶ 49, Exh. E). If there is other admissible evidence of discrepancies between the motivations of Android developers and those of other developers, such evidence could be presented to challenge the weight of Simonson's survey and opinion at trial. Unless the motivations of developers in general shared no significant overlap with those of Android developers, such discrepancies would not invalidate Simonson's survey so as to render it inadmissible. However, if Simonson attempts to testify at trial about new conclusions specific to Android developers that are not adequately supported by his survey methodology, Oracle may object at that time.

4. SURVEY TIMEFRAME.

Oracle also argues that respondents in the survey who developed applications in 2007-09 are unlikely to remember the details of their decision-making processes from that time. While not explicit, the point of this argument is presumably that Simonson's survey is unreliable because its results are based on unreliable memories. Oracle contends, and Toubia's report echoes, that well-accepted survey methodology discourages surveys that purport to study things that happened long ago (Toubia Rpt. ¶¶ 21-25, 37). These criticisms appear targeted to Questions 5 and 6, which asked respondents what year they started offering mobile applications, and what factors or considerations led to their decision to develop those applications for specific platforms (id., Exh. E).

Google and Simonson defend these questions by claiming decisions to develop for a new platform are "high involvement" or major decisions that people tend to remember well, relative to their memories of "autobiographical" information. Both Oracle and Google cite to two articles for the general proposition that autobiographical memories deteriorate over time. Contrary to Oracle's claim that Google does not refute the literature cited by Toubia, Google contends that the literature on autobiographical memory is inapplicable in this situation because the decision to develop for a new platform is not an "autobiographical" event. Toubia also cites two of Simonson's own articles for the proposition that recall issues can interfere with research results (Toubia Rpt. ¶ 25). One of those articles specifically noted that the ease with which consumers choose between options affects how they remember the positive and negative components of those options. Nathan Novemsky et al., Preference Fluency in Choice, 44 J. MARKETING RES. 347, 354 (2007).

These sources indicate that responses to Questions 5 and 6 may have been affected by imperfect recall. However, this is not a fatal flaw of the survey methodology such that the entire survey needs to be excluded. Potential issues with recall bias or imperfect recall go to the weight of Simonson's findings and are appropriate to bring up on cross-examination, or through the introduction of other admissible evidence. See Medlock v. Taco Bell Corp., No. 1:07-cv-01314-SAB, 2015 WL 8479320, at *5 (E.D. Cal. Dec. 9, 2015) (Magistrate Judge Stanley A. Boone); see also Classic Foods Intern. Corp. v. Kettle Foods, Inc., No. SACV 04-725 CJC (Ex), 2006 WL 5187497, at *7 (C.D. Cal. Mar. 2, 2006) (Judge Cormac J. Carney) (noting that "no survey is perfect," and "flaws in the survey may be elucidated on cross-examination, so that the finder of fact can appropriately adjust the weight it gives to the survey's results").

In general, many of Oracle's objections to Simonson are to the effect that his survey methodology was not optimal, or that its technical components were imperfect. However, Oracle falls short of actually demonstrating unreliability sufficient to warrant exclusion under Daubert. Most of the alleged deficiencies are of the sort that juries would properly consider in assessing the probative value of a survey. They therefore go to the survey's weight, not to its admissibility. Southland Sod Farms v. Stover Seed Co., 108 F.3d 1134, 1143 (9th Cir. 1997) (criticisms of a survey's design, format, or limited scope went to its weight, not admissibility); Prudential Ins. Co. of Am. v. Gibraltar Fin. Corp. of Cal., 694 F.2d 1150, 1156 (9th Cir. 1982) ("Technical unreliability goes to the weight accorded a survey, not its admissibility."); but see Brighton Collectibles, Inc. v. RK Texas Leather Mfg., 923 F. Supp. 2d 1245, 1257, n.8 (S.D. Cal. Feb. 12, 2013) (Judge Gonzalo P. Curiel) (Prudential's broad statement must be construed in light of Daubert and the court's gatekeeping obligation).

5. INTERPRETATION OF SURVEY RESULTS.

Simonson's survey found that 62% of respondents identified "User base/Market share/Demand/Popularity/ROI" as the first consideration for developers in deciding whether to develop for a particular platform (Simonson Rpt. ¶ 40). Simonson interpreted this result to support his conclusions that "demand (or expected demand) and related economic considerations (such as ROI)" are the primary factors in development decisions, while prior familiarity with the programming language is a "less important, secondary" factor (ibid.). Oracle points out, however, that programming language factors into the ROI because prior familiarity with the language used lowers the "investment" cost to the developer of working with a new platform. Thus, Oracle contends, prior familiarity with the programming language is in fact a "significant factor," which contradicts Simonson's opinion.

Again, it is unclear why Oracle's argument compels the exclusion of Simonson's survey and opinion. Simonson does not deny that prior familiarity with the programming language is a factor considered by developers, or that ROI is part of "User base/Market share/Demand/Popularity/ROI." He concluded only that, based on survey results, economic considerations are relatively more important than prior familiarity with a programming language (Simonson Rpt. ¶ 40). This is supported by survey results showing that although 62% of respondents identified some form of "User base/Market share/Demand/Popularity/ROI" as their primary consideration, only one respondent actually listed "ROI" as a primary consideration (id., Exh. F, Table 4, at 4). How much prior familiarity with the programming language contributes to ROI, and in turn to the decision to develop for a particular platform, is a factual determination subject to competing interpretations.

Similarly, Oracle's reliance on the reaction to the survey of Dr. James Kearl, the court-appointed Rule 706 expert, is misplaced. Kearl said he did not find the survey's questions "particularly interesting" because "nobody would admit that they would have a hard time learning something new," ostensibly referring to the survey's questions on how easily developers could learn a new language (see id., Exh. E). None of his comments actually challenged the survey's relevance or reliability. These quotes from Kearl provide no basis for exclusion. As cited by Oracle, they are essentially personal or ipse dixit opinions, not expert conclusions or evidence. Even if they were expert opinions, they would be properly raised by competing experts at trial, not as a basis for exclusion.

The parties may disagree as to the precise implications of the survey results, and of course do disagree as to the greater issue of how much Google's copying of the declaring code and SSO of the 37 APIs factored into Android's success. But these disagreements do not suggest Simonson's opinion is so unfounded as to be inadmissible. To the extent that Oracle challenges Simonson's conclusions, but not the survey methodology or results they are reasonably based on, such critiques go to the weight of the survey rather than its admissibility. See Clicks Billiards, Inc. v. Sixshooters, Inc., 251 F.3d 1252, 1265 (9th Cir. 2001) (critiques of a survey's conclusions go to the survey's weight rather than its admissibility).

6. LACK OF SURVEY CONTROL GROUP.

Oracle also contends that Simonson's lack of a control group is fatal to the admissibility of his survey. Oracle's reasoning seems to be: Simonson purports to measure a kind of "causation," that is, how specific factors affect developers' decisions; a survey that purports to measure causation must include a proper control; therefore, Simonson's survey needed a proper control. Oracle cites Shari S. Diamond, Reference Guide on Survey Research, in REFERENCE MANUAL ON SCI. EVIDENCE 359, 397-98 (3d ed. Fed. Jud. Ctr. 2011), as well as two of Simonson's previous reports, for the proposition that a survey that purports to measure causation must include a control group.

The surveys contemplated by those sources, however, attempted to measure how the introduction of a particular stimulus was causally linked to a particular outcome (e.g., how publication of a particular advertisement may have caused consumer confusion). Diamond, supra, at 397-98; Itamar Simonson Report at ¶ 45, Safe Auto Ins. Co. v. State Auto. Mut. Ins. Co., No. 2:07-cv-1121 (S.D. Ohio Oct. 27, 2008); Itamar Simonson Report at ¶ 44, Larin Corp. v. Alltrade, Inc., No. EDCV 06-1394 ODW (OPx) (C.D. Cal. Feb. 15, 2008). Under such circumstances the produced outcome (e.g., consumer confusion) may have been caused by preexisting conditions (e.g., preexisting consumer beliefs) rather than the tested stimulus, so it makes sense to use a control group that has not been exposed to the stimulus as a baseline against which to measure the stimulus's effects.

However, a control group is not required for a survey that purports only to understand what developers perceive as relatively more or less important factors in their decision-making process (Simonson Dep. at 98-99). As Google points out, Simonson did not attempt to test the effect of a stimulus, so there was nothing to control for. Oracle characterizes the absence of a control group as a fatal flaw in Simonson's survey, but does not explain what stimulus required controlling, or why a "control group" was required under these circumstances. Rather, Oracle vaguely asserts that without a control, Simonson "cannot determine if his survey results are accurate, or reflect confounding factors or a flawed survey design." Oracle does not define or otherwise clarify what it means by "confounding factors," much less explain how such factors necessitated a control group for the survey to be reliable. In short, Oracle has not successfully challenged Simonson's explanation that a control group was not required in this survey to produce sufficiently reliable results.

7. INCLUSION OF PRETEST RESULTS.

After comparing results from both the pretest of 23 respondents and the full-scale survey, Simonson decided to include results from the pretest in his final results (Simonson Dep. at 181). Oracle contends this inclusion violated generally accepted standards for survey research, because Simonson knew how the pretest results would affect his overall results, and thus used the pretest to artificially alter the outcome of his survey. Oracle and its rebuttal survey expert, Dr. Olivier Toubia, cite Erin Ruel et al. for the proposition that this "violates established survey practice" (Toubia Rpt. ¶ 62). See ERIN RUEL ET AL., SURVEY RESEARCH: THEORY AND APPLICATIONS 117 (2016). Erin Ruel et al. explain that if the survey is modified between the pretest and full test, as it was here, "data collected in the pretest . . . could be inaccurate or biased compared to the results of the full-scale study." Ibid. They acknowledge that "it may be unreasonable to exclude [pretest] participants from the entire study, especially in small-scale studies," but add that under those circumstances, "comparison and discussion of the differences between the pretested groups and the full-scale group is necessary. It is also important to exercise caution when interpreting these results, and it is important to note this potential data contamination as a possible limitation of the research." Ibid.

Google and Simonson's counterargument that the pretest results agreed with the overall results of the survey is beside the point. The issue is not whether the pretest results accorded with the full-scale survey results, but whether both were achieved using uniform methodology so as to produce reliably similar results. Google and Simonson do not dispute that the survey was modified between the pretest and full-scale survey. Google's characterization of these modifications as "minor" and "cosmetic" is disingenuous. Simonson himself explained that one question was changed because the pretest suggested it was misinterpreted by some respondents, and another entirely new question was added without explanation (see Simonson Rpt. ¶ 22). These are hardly "cosmetic" changes. For example, pretest respondents who "misinterpreted" the original Question 8 may have responded differently had they been asked the modified Question 8 (Simonson Rpt., Exh. E). Or it may be, as Oracle suggested, that Simonson added a new question because his initial screening questions were overbroad. At minimum, the new question could raise concerns as to differences in scope or sample, and therefore reliability, between the pretest and full-scale surveys.

Nonetheless, after conducting the pretest and modifying the survey questionnaire, Simonson included the pretest results in his overall results without any comparison or discussion of differences between the pretest and full-scale groups, and without any acknowledgment of how this inclusion might have limited the survey's reliability or conclusions. Moreover, as Oracle points out, the specific results aside, the inclusion of 23 additional data points in the sample in and of itself bolsters the apparent credibility of Simonson's survey, a benefit that runs in Google's favor.

It is no defense to say that Simonson's decision to include pretest results was harmless because those results were "very similar" to the full-scale survey results (see Oracle Exh. 26, Simonson Dep. at 181). The point is that insofar as Simonson improperly authorized himself to decide whether or not to include a particular set of data after he discovered how that data would affect his overall results, his methodology was unreliable.

Any portion of Simonson's survey or opinions based on pretest results is therefore STRICKEN. Simonson may still refer to the survey's size, statistical significance, or respondents, but in doing so he must refer only to the full-scale survey, and he must modify any specific numerical findings accordingly.

8. LATE SUBMISSION OF SIMONSON'S REPORT.

Admissibility issues aside, Oracle contends Simonson should not be permitted to testify in Phase I because he submitted his report after the January 8, 2016 deadline for Google's expert reports on fair use. Excluding expert evidence is an "automatic" sanction for failure to disclose information in a timely fashion unless the proffering party can show the violation is either substantially justified or harmless. Fed. R. Civ. P. 26(a)(2)(D), 37(c)(1); see also Goodman v. Staples The Office Superstore, LLC, 644 F.3d 817, 827 (9th Cir. 2011); R & R Sails, Inc. v. Ins. Co. of Pa., 673 F.3d 1240, 1246 (9th Cir. 2012). Google does not challenge this contention in its opposition to Oracle's motion. This is ultimately a moot issue, since Google confirmed it did not intend to offer Simonson in its case-in-chief on fair use (Def.'s Opp. to Pl.'s MIL No. 5 at 1 n.1).

CONCLUSION

For the foregoing reasons, the Court GRANTED IN PART and DENIED IN PART Oracle's fifth motion in limine. As stated in the final pretrial order, Simonson must clarify that his survey results indicate the motivations of developers in general, not the specific motivations of Android developers within the 2007-08 timeframe. He may, however, attempt to explain why and how his findings and conclusions are nonetheless probative of what motivated Android developers in 2007-08, subject to cross-examination and the presentation of contrary evidence.

Simonson may not attempt to define or analyze specific factors like "popularity," "established user base," or "market demand" in his survey results, insofar as those factors are not specifically defined or analyzed in the survey questionnaire. He also may not opine as to whether or how specific factors contribute to a platform's overall success. He may, however, testify as to his survey results to the extent that they indicate what factors developers in general consider in deciding whether to develop for a particular platform.

Any portion of Simonson's survey or opinions based on pretest results is STRICKEN. Any references to the size of the survey, its statistical significance, or its respondents may be based only on the full-scale survey and its results.

Dated: May 2, 2016.

/s/_________

WILLIAM ALSUP

UNITED STATES DISTRICT JUDGE

