methods are all read from a file with its name hard-wired as xmlspec.xsl in this benchmark. For this particular input file provided by DaCapo, these two calls are never executed and thus annotated to be disregarded.
With these two annotations, Solar terminates in 28 minutes with its unsound list being empty.

4.4.3 checkstyle

Probe reports no unsoundly resolved calls. To see why Solar is unscalable, we examine one invoke() call in line 1773 of Fig. 12, found automatically by Probe, that stands out as being possibly imprecisely resolved.

  Class: org.apache.commons.beanutils.PropertyUtilsBean
  921   PropertyDescriptor[] getPropertyDescriptors(Object b) {
  926     return getPropertyDescriptors(b.getClass()); }       <Entry>

  Class: java.beans.Introspector
  1275  Method[] getPublicDeclaredMethods(Class clz) {
  1294    return clz.getMethods(); }       <Member-Introspecting> [Annotation Point]

  Class: org.apache.commons.beanutils.PropertyUtilsBean
  1764  Object invokeMethod(Method m, Object o, Object[] v) {
  1773    return m.invoke(o, v); }         <Side-Effect> [Imprecise Location]

  Fig. 12. Probing checkstyle.

There are 962 target methods inferred at this call site. Probe highlights its corresponding member-introspecting method clz.getMethods() (in line 1294) and its entry methods (with one of these being shown in line 926). Based on this, we easily find by code inspection that the target methods called reflectively at the invoke() call are the setters whose names share the prefix "set". As a result, the clz.getMethods() call is annotated to return the 158 "setX" methods in all the subclasses of AutomaticBean.

In addition, the Method objects created at one getMethods() call and one getDeclaredMethods() call in class *.beanutils.MappedPropertyDescriptor$1 flow into the invoke() call in line 1773 as false positives due to imprecision in the pointer analysis. These Method objects have been annotated away.

After the three annotations, Solar is scalable, terminating in 38 minutes.

Given the same annotations, existing reflection analyses [5, 17, 20, 21] still cannot handle the invoke() call in line 1773 soundly, because its argument o points to objects that are initially created at a newInstance() call and then flow into a non-post-dominating cast operation (like the one in line 12 of Fig. 1); a simplified sketch of this pattern is given below. However, Solar handles this invoke() call soundly by using LHM, highlighting once again the importance of collective inference in reflection analysis.

4.5 RQ3: Recall and Precision

To compare the effectiveness of Doop, Elf and Solar as under-approximate reflection analyses, it is most relevant to compare their recall, measured by the number of true reflective targets discovered at reflective call sites that are dynamically executed under certain inputs. In addition, we also compare their (static) analysis precision with two clients, but these results must be interpreted with one caveat: existing reflection analyses can appear "precise" simply because of their highly under-approximated handling of reflection. Our precision results are therefore presented to show that Solar exhibits nearly the same precision as prior work despite its significantly improved recall on real code.

Unlike Doop and Elf, Solar can automatically identify "problematic" reflective calls for lightweight annotations. To ensure a fair comparison, the three annotated programs shown in Fig. 10 are used by all three analyses.
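To make the checkstyle case in Section 4.4.3 more concrete, the following is a minimal, self-contained sketch of the pattern discussed there. It is not code from checkstyle or commons-beanutils: the class DefaultBean, the method configure() and its parameters are hypothetical stand-ins. The sketch only shows the shape of the problem: an object is created by newInstance(), the only cast applied to it lies on one branch (and thus does not post-dominate the newInstance() call), and its "set*" methods are looked up with getMethods() and then called via invoke().

  import java.lang.reflect.Method;

  public class ReflectiveSetterSketch {

      // Hypothetical bean; stands in for a subclass of checkstyle's AutomaticBean.
      static class DefaultBean {
          public void setName(String name) { /* ... */ }
      }

      // Creates a bean reflectively and assigns one of its properties via Method.invoke().
      static Object configure(String className, String prop, Object value) throws Exception {
          Object o = Class.forName(className).newInstance();   // object created reflectively

          if (value == null) {
              // This cast lies on only one branch, so it does not post-dominate the
              // newInstance() call above; analyses that recover the allocated type
              // solely from a downstream cast cannot soundly type o at the
              // invoke() call below.
              return (DefaultBean) o;
          }

          // Member introspection, analogous to line 1294 in Fig. 12.
          String setter = "set" + Character.toUpperCase(prop.charAt(0)) + prop.substring(1);
          for (Method m : o.getClass().getMethods()) {
              if (m.getName().equals(setter) && m.getParameterCount() == 1) {
                  m.invoke(o, value);   // reflective setter call, analogous to line 1773
                  break;
              }
          }
          return o;
      }
  }

Because the cast to DefaultBean does not post-dominate the newInstance() call, cast-based type inference leaves the type of o unresolved at the invoke() call; this is the situation in which Solar's LHM still allows the call to be resolved soundly.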