SimilaR: R Code Clone and Plagiarism Detection

Third-party software for assuring source code quality is becoming increasingly popular. Tools that evaluate the coverage of unit tests, perform static code analysis, or inspect run-time memory use are crucial in the software development life cycle. More sophisticated methods allow for performing meta-analyses of large software repositories, e.g., to discover abstract topics they relate to or common design patterns applied by their developers. They may be useful in gaining a better understanding of component interdependencies, avoiding cloned code, as well as detecting plagiarism in programming classes. A meaningful measure of similarity of computer programs often forms the basis of such tools. While there are a few noteworthy instruments for similarity assessment, none of them turns out particularly suitable for analysing R code chunks. Existing solutions rely on rather simple techniques and heuristics and fail to provide users with the kind of sensitivity and specificity required for working with R scripts. In order to fill this gap, we propose a new algorithm based on a Program Dependence Graph, implemented in the SimilaR package. It can serve as a tool not only for improving R code quality but also for detecting plagiarism, even when it has been masked by applying some obfuscation techniques or inserting dead code. We demonstrate its accuracy and efficiency in a real-world case study.


Introduction
In recent years there has been a rise in the availability of tools related to code quality, including inspecting run-time memory usage (Serebryany et al., 2012), evaluating unit test coverage (Ammann and Offutt, 2013), discovering abstract topics to which source code is related (Grant et al., 2012; Tian et al., 2009; McBurney et al., 2014; Linstead et al., 2007; Maskeri et al., 2008), finding parts of code related to a particular bug submission (Lukins et al., 2008), and checking for similarities between programs. With regards to the latter, quantitative measures of similarity between source code chunks play a key role in such practically important areas as software engineering, where encapsulating duplicated code fragments into functions or methods is considered a good development practice, or in computing education, where any cases of plagiarism should be brought to a tutor's attention, see (Misic et al., 2016; Mohd Noor et al., 2017; Roy et al., 2009; Rattan et al., 2013; Ali et al., 2011; Hage et al., 2011; Martins et al., 2014). Existing approaches towards code clone detection can be classified based on the abstraction level at which they inspect programs' listings.
• Textual -the most straightforward representation, where a listing is taken as-is, i.e., as raw text. Typically, string distance metrics (like the Levenshtein one; see, e.g., van der Loo, 2014) are applied to measure similarity between pairs of the entities tested. Then some (possibly approximate) nearest neighbour search data structures seek matches within a larger code base. Hash functions can be used for the same purpose, where fingerprints of code fragments might make the comparison faster, see, e.g., (Johnson, 1993;Manber, 1994;Rieger, 2005). Another noteworthy approach involves the use of Latent Semantic Analysis (Marcus and Maletic, 2001) for finding natural clusters of code chunks.
• Lexical (token-based) - where a listing is transformed into tokens, which are generated by the parser during the lexical analysis stage. This form is believed to be more robust than the textual one, as it is invariant to particular coding styles (indentation, layout, comments, etc.). Typically, algorithms to detect and analyse common token sub-sequences are used (see, e.g., Ueda et al., 2002; Li et al., 2006; Wise, 1992; Prechelt et al., 2000; Hummel et al., 2011; Schleimer et al., 2003).
• Syntactic (tree-based) - where programs' abstract syntax trees (see below) can be compared against each other directly in a pairwise fashion (Mayrand et al., 1996; Patenaude et al., 1999; Fu et al., 2017) or by means of some cluster analysis-based approach (Jiang et al., 2007).
• Semantic - the most sophisticated representation, involving a set of knowledge-based, language-dependent transformations of a program's abstract syntax tree. Usually, a data structure commonly known as a Program Dependence Graph (PDG) is created, see below for more details. In such a data structure, the particular order of (control- or data-) independent code lines is negligible. A popular approach to measure similarity between a pair of PDGs concerns searching for (sub)isomorphisms of the graphs, see (Komondoor and Horwitz, 2001; Liu et al., 2006; Qu et al., 2014).
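To make the textual level above concrete, here is a minimal sketch (not part of any of the cited tools) that measures the similarity of two code chunks taken as raw text, using base R's adist(), which computes the (generalised) Levenshtein distance:

```r
# Normalised Levenshtein similarity between two code chunks taken as raw
# text; a toy illustration of the textual level of abstraction.
code1 <- "s <- 0; for (i in x) s <- s + i"
code2 <- "s<-0; for(i in x) s<-s+i"       # same logic, different layout
d <- drop(adist(code1, code2))            # edit distance
sim <- 1 - d / max(nchar(code1), nchar(code2))
print(round(sim, 2))
```

Note how mere layout changes already lower a purely textual similarity score, which motivates the more abstract representations discussed next.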
There are a few generally available software solutions whose purpose is to detect code clones, e.g., MOSS (see http://theory.stanford.edu/~aiken/moss/ and Schleimer et al., 2003) and JPlag (see http://jplag.de/ and Prechelt et al., 2000), see also (Misic et al., 2016; Vandana, 2018) for an overview. These tools are quite generic, offering built-in support for popular programming languages such as Java, C#, C++, C, or Python.
Unfortunately, there is no package of this kind that natively supports the R language, which is the GNU version of S (see, e.g., Becker et al., 1998; Venables and Ripley, 2000). It is a serious gap: R is amongst the most popular languages 1 , and its use has a long, successful track record, particularly with respect to all broadly-conceived statistical computing, machine learning, and other data science activities (Wickham and Grolemund, 2017). With some pre-processing, MOSS and JPlag can be applied on R code chunks, but the accuracy of code clone detection is far from optimal. This is due to the fact that, while at first glance being an imperative language, R allows plenty of typical functional constructs (see the next section for more details and also, e.g., Chambers, 1998; Wickham, 2014; Chambers, 2008). On the one hand, its syntax resembles that of the C language, with curly braces to denote a nested code block and classical control-flow expressions such as if..else conditionals, or while and for (for each) loops. On the other hand, R's semantics is based on the functional Scheme language (Abelson et al., 1996), which is derived from Lisp. Every expression (even one involving the execution of a for loop) is in fact a call to a function or any combination thereof, and each function is a first-class object that (as a rule of thumb) has no side effects. Moreover, users might choose to prefer applying Map-Filter-Reduce-like expressions on container objects instead of the classical control-flow constructs, or even mix the two approaches. Also, the possibility of performing the so-called nonstandard evaluation (metaprogramming) makes it possible to change the meaning of certain expressions during run-time. For instance, the popular forward-pipe operator, %>%, implemented in the magrittr (Bache and Wickham, 2014) package, allows for converting a pipeline of function calls to a mutually nested series of calls.
In this paper we describe a new algorithm that aims to fill the aforementioned gap (based on Bartoszuk, 2018). The method's implementation is included in the SimilaR 2 package. It transforms the analysed code base into a Program Dependence Graph that takes into account the most common R language features as well as the most popular development patterns in data science. Due to this, the algorithm is able to detect cases of plagiarism quite accurately. Moreover, thanks to a novel, polynomial-time approximate graph comparison algorithm, its implementation has relatively low run-times. This makes it possible to analyse software repositories of significant sizes.
This paper is set out as follows. First we introduce the concept of a Program Dependence Graph along with its R language-specific customisations. Then we depict a novel algorithm for quantifying similarity of two graphs. Further on we provide some illustrative examples for the purpose of showing the effects of applying particular alterations to a Program Dependence Graph. What is more, we demonstrate the main features of the SimilaR package version 1.0.8. Then we perform an experiment involving the comparison of the complete code-base of two CRAN packages.

Program Dependence Graph
A Program Dependence Graph (PDG) is a directed graph representing various relations between individual expressions in a source code chunk. As we mentioned in the introduction, it is among the most sophisticated data structures used for the purpose of code clone detection. First proposed by Ferrante et al. (1987), it forms the basis of many algorithms, see, e.g., (Liu et al., 2006; Qu et al., 2014; Gabel et al., 2008; Krinke, 2001; Horwitz and Reps, 1991; Komondoor and Horwitz, 2001; Ghosh and Lee, 2018; Nasirloo and Azimzadeh, 2018). 1 For instance, the 2018 edition of the IEEE Spectrum ranking places R on the No. 7 spot, see http://spectrum.ieee.org/at-work/innovation/the-2018-top-programming-languages.
2 See https://CRAN.R-project.org/package=SimilaR. SimilaR can be downloaded from the Comprehensive R Archive Network (CRAN) repository (Silge et al., 2018) and installed via a call to install.packages("SimilaR").

Abstract Syntax Tree.
To create a PDG, we first need to construct an Abstract Syntax Tree (AST) of a given program. In R, it is particularly easy to compute the AST corresponding to any expression, due to its built-in support for reflection that facilitates metaprogramming. For instance, the parse() function can be called to perform lexical analysis of a code fragment, yielding a sequence of language objects. Moreover, a basic version of a function to print an AST takes just a few lines of code:

R> show_ast <- function(x) {
+    as.list_deep <- function(x) # convert to a plain list (recursively)
+    { if (is.call(x)) lapply(as.list(x), as.list_deep) else x }
+    x <- substitute(x) # expression that generated the argument
+    str(as.list_deep(x)) # pretty-print
+ }

Let us visualise the AST corresponding to the expression d <- sum((x-y)*(x-y)).
R> show_ast(d <- sum((x-y)*(x-y)))

In R, both a constant (numeric, logical, string, etc.) and a symbol (name) constitute what we call a simple expression. A compound expression is in turn a sequence of n + 1 expressions (simple or compound ones), f, a_1, ..., a_n, n ≥ 0, which represents a call to f with arguments a_1, ..., a_n (which we typically denote as f(a_1, ..., a_n)).
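The structure of a compound expression can be inspected directly with R's reflection facilities; the following sketch confirms that a call is simply a function followed by its arguments:

```r
# A compound expression is a call: its first element is the function
# (here the symbol `-`), followed by the arguments x and y.
e <- quote(x - y)
print(as.list(e))
print(identical(e, quote(`-`(x, y))))  # TRUE: both parse to the same AST
```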
The above AST can be written in the Polish (prefix) notation as '<-', d, sum, '*', '-', x, y, '-', x, y. Such a notation is used in Scheme and Lisp; we skipped a call to '(' for readability, as (e) is equivalent to e for each expression e. Alternatively, the above can be written as '<-'(d, sum('*'('-'(x, y), '-'(x, y)))) in the "functional" form. Let us emphasise that even an application of a binary operator (here: <-, *, and -) corresponds to some function call. Hence, x-y is just syntactic sugar for '-'(x, y). Moreover, other expressions such as if..else and loops also correspond to some function calls. For example:

R> show_ast(for (i in 1:5) {
+    print("i = ", i)
+    if (i %% 2 == 0) print(":)") else print(":$")
+ })

Vertex and edge types. The vertices of a PDG represent particular expressions, such as a variable assignment, a function call, or a loop header. Each vertex is assigned its own type, reflecting the kind of expression it represents. The comprehensive list of vertex types for the R language code-base used in SimilaR is given in Table 1. The number of distinct types is a kind of compromise between the algorithm's sensitivity and specificity. It was set empirically, based on numerous experiments (Bartoszuk, 2018).
We may also distinguish two types of edges: control dependency and data dependency ones. The former represents the branches in a program's control flow that result in a conditional execution of expressions such as if-else-constructs or loops. A subgraph of a PDG consisting of all the vertices and only the control dependency edges is called a Control Dependence Subgraph (CDS).
The latter edge type is responsible for modelling data flow relations: there is an edge from a vertex v to a vertex u whenever a variable assigned in the expression corresponding to v is used in the computation of the expression related to u. A spanning subgraph of a PDG that consists solely of the data dependency edges is called a Data Dependence Subgraph (DDS).

Hence, a PDG of a function F() is a vertex- and edge-labelled directed graph F = (V_F, E_F, ζ_F, ξ_F), where V_F is the set of vertices, E_F ⊆ V_F × V_F is the set of edges, ζ_F : V_F → T gives the type of each vertex (with T denoting the set of vertex types listed in Table 1), and ξ_F : E_F → {DATA, CONTROL} marks if an edge is a data- or control-dependency one. Note that each PDG is rooted - there exists one and only one vertex v with indegree 0 and ζ_F(v) = Entry.
Example code chunks with the corresponding dependence graphs are depicted in Figures 1 and 2; the meaning of vertex colours is explained in Table 1.

The most basic version of an algorithm to create a PDG based on an abstract syntax tree is described in (Harrold et al., 1993). Note that a CDS is a subgraph of an AST: it provides the information about certain expressions being nested within other ones, e.g., that some assignment is part (a child) of a loop's body. Additionally, an AST includes a list of local variables and links them with the expressions that rely on them. This is a crucial piece of information used to generate the DDS.
Note, however, that a PDG created for the purpose of code clone detection cannot be treated as a straightforward extension of a raw AST. The post-processing procedure should be carefully customised, taking into account the design patterns and coding practices of a particular programming language. Hence, below we describe the most noteworthy program transforms employed in the SimilaR package so that it is invariant to typical attacks, i.e., transforms changing the way the code is written yet not affecting its meaning.
Unwinding nested function calls. As mentioned above, in R, as in any functional language, functions play a key role. A code chunk can be thought of as a sequence of expressions, each of which is composed of function calls. Base or external library functions are used as a program's building blocks and often very complex tasks can be written with only few lines of code.
For instance, given a matrix X ∈ R^{d×n} representing n vectors in R^d and a vector y ∈ R^d, the vector in X closest to y with respect to the Euclidean metric can be determined by evaluating X[, which.min(apply((X-y)^2, 2, sum))]. This notation is very concise, and we can come up with many equivalent forms of this expression written in a much more loquacious fashion.
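As a hedged illustration (random data for demonstration only), the concise expression and one of its more loquacious equivalents indeed pick the same column:

```r
# Both forms select the same column of X (the one closest to y in the
# Euclidean metric); (X - y) relies on column-wise recycling of y.
set.seed(1)
X <- matrix(rnorm(12), nrow = 3)  # 4 vectors in R^3, stored as columns
y <- rnorm(3)
concise <- X[, which.min(apply((X - y)^2, 2, sum))]
dists <- numeric(ncol(X))
for (j in seq_len(ncol(X))) dists[j] <- sum((X[, j] - y)^2)
loquacious <- X[, which.min(dists)]
print(identical(concise, loquacious))  # TRUE
```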
Therefore, in SimilaR, hierarchies of nested calls, no matter their depth, are always recursively unwound by introducing as many auxiliary assignments as necessary. For instance, f(g(x)) is decomposed into gx <- g(x); f(gx). This guarantees that all possible variants of such expressions are represented in the same way in the PDG.
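The transform can be sketched as follows; this toy function is a simplification, not SimilaR's actual implementation (e.g., it also introduces a temporary for the outermost call, and the tmp* names are purely illustrative):

```r
# Recursively unwind nested calls: f(g(x)) becomes tmp1 <- g(x);
# tmp2 <- f(tmp1). Symbols and constants are left as-is.
unwind <- function(expr) {
  stmts <- list()
  counter <- 0
  rec <- function(e) {
    if (!is.call(e)) return(e)               # leave simple expressions
    args <- lapply(as.list(e)[-1], rec)      # unwind the arguments first
    counter <<- counter + 1
    tmp <- as.symbol(paste0("tmp", counter))
    stmts[[length(stmts) + 1]] <<- call("<-", tmp, as.call(c(e[[1]], args)))
    tmp
  }
  rec(expr)
  stmts
}
print(sapply(unwind(quote(f(g(x)))), deparse))
# "tmp1 <- g(x)"  "tmp2 <- f(tmp1)"
```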
Forward-pipe operator, %>%. Related to the above is magrittr's forward-pipe operator, %>%, which has recently gained much popularity within the R users' community. Even though the operator is just syntactic sugar for forwarding an object into the next function call/expression, at the time of writing of this manuscript, the package has been used as a direct dependency by over 700 other CRAN packages. Many consider it very convenient, as it mimics the "left-to-right" approach known from object-oriented languages like Java, Python or C++. Instead of writing (from the inside out) f(g(x),y), with magrittr we can use the syntax x %>% g %>% f(y) (which would normally be represented as x.g().f(y) in other languages). To assure proper similarity evaluation, SimilaR unwinds such expressions in the same manner as nested function calls.
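The equivalence that SimilaR relies on can be verified directly; a minimal sketch (assuming the magrittr package is installed; f, g, x, y are arbitrary illustrative objects):

```r
# The pipe is syntactic sugar: x %>% g %>% f(y) evaluates as f(g(x), y).
library(magrittr)
f <- function(a, b) a + b
g <- function(a) a * 2
x <- 3; y <- 1
print(identical(x %>% g %>% f(y), f(g(x), y)))  # TRUE (both yield 7)
```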

Calls within conditions in control-flow expressions.
An expression serving as a Boolean condition in an if or a while construct might be given as a composition of many function calls. The same might be true for an expression generating the container to iterate over in a for loop. PDG vertices representing such calls are placed on the same level as their corresponding control-flow expressions so that they can be unwound just as any other function call.
Canonicalization of conditional statements. A conditional statement whose branch ends with a call to return() can equivalently be written with or without an explicit else clause: this exploits the fact that the code after such an if statement is only executed whenever the logical condition is false. To avoid generating very different control dependencies for equivalent code chunks, we always canonicalise them by putting the code-richer branch outside of the conditional statement.
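For instance, the following two (hypothetical, purely illustrative) functions are semantically equivalent and should obtain identical dependence graphs after canonicalisation:

```r
# An early return: the trailing x below is only reached when the
# condition is false, which makes the two forms equivalent.
clamp_a <- function(x) {
  if (x > 1) return(1)
  x
}
clamp_b <- function(x) {
  if (x > 1) return(1) else x
}
print(c(clamp_a(2), clamp_a(0.5), clamp_b(2), clamp_b(0.5)))  # 1 0.5 1 0.5
```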
Tail call to return(). The return value generated by evaluating a sequence of expressions wrapped inside curly braces (the `{` function) is determined by the value of its last expression. If a function's body comprises such a code block, the call to return() in the last expression is optional. However, many users write it anyway. Therefore, a special vertex of type return has been introduced to mark an expression that generates the output of a function.
Map-like functions. Base R supports numerous Map-like operations that are available in many programming languages. The aim of the members of the *apply() family (apply(), lapply(), sapply(), etc.) is to perform a given operation on each element/slice of a given container. These are unwound into an expression involving a for loop. For instance, a call to ret <- lapply(l, fun, ...) can be written as ret <- list(); for (el in l) ret[[length(ret)+1]] <- fun(el, ...).

Variable duplication. To prevent redundant assignments such as xcopy <- x, made just in order to refer to the original value under a new alias, a hierarchical variable dictionary is kept so that data dependency edges are generated properly.
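The equivalence underlying the Map-like transform can be checked directly; a minimal sketch:

```r
# lapply() and the explicit for loop build the same list.
l <- list(1, 2, 3)
fun <- function(el) el^2
ret1 <- lapply(l, fun)
ret2 <- list()
for (el in l) ret2[[length(ret2) + 1]] <- fun(el)
print(identical(ret1, ret2))  # TRUE
```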

Memoization.
In pure functional languages it is assumed that functions have no side effects, i.e., the same arguments are mapped to the same return value. In R this is of course not always technically true (e.g., when the pseudo-random number generator is involved), but such an assumption turns out to be helpful in our context. Therefore, if a function call instance is invoked more than once, its value is memoized by introducing a new variable.
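A minimal sketch of the effect of this transform (valid precisely under the no-side-effects assumption stated above):

```r
# A repeated call instance is computed once and its value reused.
f <- function(x) x^2
x <- 4
r1 <- f(x) + f(x)
fx <- f(x); r2 <- fx + fx   # memoized form
print(identical(r1, r2))    # TRUE (both equal 32)
```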
Dead code. Many plagiarism detection algorithms can be easily misled by adding random code that does not affect the main computations. In SimilaR, such dead code is identified and removed. This is done by iteratively deleting all vertices whose outdegree is zero (except those of type return).
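The iterative deletion step can be sketched as follows; the edge-list representation and vertex names here are hypothetical and purely illustrative:

```r
# Repeatedly delete vertices with outdegree 0, except return-type ones;
# an assignment nothing depends on ("dead code") disappears.
prune_dead <- function(edges, types) {
  repeat {
    vs <- names(types)
    outdeg <- sapply(vs, function(v) sum(edges$from == v))
    dead <- vs[outdeg == 0 & types != "return"]
    if (length(dead) == 0) return(vs)
    types <- types[!(vs %in% dead)]
    edges <- edges[!(edges$from %in% dead | edges$to %in% dead), ]
  }
}
edges <- data.frame(from = c("a", "b"), to = c("b", "ret"),
                    stringsAsFactors = FALSE)
types <- c(a = "assign", b = "assign", ret = "return", junk = "assign")
print(prune_dead(edges, types))  # "a" "b" "ret" -- "junk" was removed
```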
To sum up, SimilaR guarantees that the Program Dependence Graph is the same regardless of, e.g., the order of independent function calls, the nesting of function calls, or the use of the forward-pipe operator. Hence, it is invariant to the most typical attacks. Moreover, it has been implemented in such a way that new kinds of transformations can easily be added in the future, as R development practices and common program design patterns evolve.

Comparing Program Dependence Graphs
In our setting, code similarity assessment reduces to a comparison between a pair of Program Dependence Graphs. In this section we are interested in an algorithm μ such that μ(F, G) ∈ [0, 1] represents a similarity degree between two PDGs F and G. A similarity of 1 denotes that two PDGs are identical, while 0 means that they are totally different. Alternatively, we might be interested in a non-symmetric measure μ̄(F, G) ∈ [0, 1] representing the degree to which the source code of F is contained within G.
Ideally, an algorithm to compare two PDGs should enjoy the following properties:
• it should be flexible, in the sense that introducing a "small difference" in one of the graphs should not affect the estimated similarity degree significantly;
• it should be fast to execute, so that computing numerous pairwise similarities can be performed in a reasonable time span.
Due to the latter, we immediately lose our interest in all currently known exact algorithms to find subgraph isomorphisms or maximum common subgraphs because of their exponential-time complexity (the problems are NP-hard; see, e.g., Wegener, 2005). To recall, two graphs are isomorphic whenever there exists a mapping between the two graphs' vertices preserving the node adjacencies.
In the SimilaR package, we use a modified version (for increased flexibility and better performance in the plagiarism detection problem) of an algorithm described in (Shervashidze et al., 2011), which itself is based on the Weisfeiler-Lehman isomorphism test (Weisfeiler and Lehman, 1968) and graph kernels. Note that the base method has been successfully used in many applications, e.g., in cheminformatics (Mapar, 2018) and programming autonomous robots (Luperto and Amigoni, 2019).
In each of the h iterations of the SimilaR algorithm, we assign new labels to the vertices of a PDG based on their neighbours' labels. While in the original algorithm (Shervashidze et al., 2011) two vertices are considered different as soon as one of their neighbours has been assigned a different label, here we might still assign the same label if the vertices' adjacency differs only slightly. Our approach turns out to be more robust against minor code changes or some vertices being missing in the graph (Bartoszuk, 2018).
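A single relabelling iteration of the original (unmodified) Weisfeiler-Lehman scheme can be sketched as follows; the importance-based tolerance of SimilaR's variant is deliberately omitted here:

```r
# Each vertex's new label is determined by its current label and the
# sorted multiset of its outgoing neighbours' labels; identical
# signatures are remapped to the same consecutive integer.
wl_iteration <- function(labels, adj) {
  signatures <- sapply(seq_along(labels), function(v)
    paste(labels[v], paste(sort(labels[adj[[v]]]), collapse = ","),
          sep = "|"))
  as.integer(factor(signatures))
}
adj <- list(c(2, 3), c(3), integer(0))   # vertex -> outgoing neighbours
labels <- c(1, 1, 2)                     # vertices 1 and 2 share a type
print(wl_iteration(labels, adj))         # 1 2 3: they now differ
```

Observe that vertices 1 and 2, despite carrying the same initial label, receive distinct labels because their neighbourhoods differ, which is exactly the strictness SimilaR's modification relaxes.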
SimilaR algorithm at a glance. Before we describe every step of the algorithm in detail, let us take a look at it from a bird's-eye perspective. If we are to assign each vertex a new label that is uniquely determined by its current type as well as the labels allocated to its neighbours, two identical graphs will always be coloured the same way, no matter how many times we reiterate the labelling procedure. In particular, after h iterations, a vertex's label depends on the types of the vertices whose distance from it is at most h.
We are of course interested in assigning equivalent labels to vertices in graphs that are not necessarily identical, but still similar to each other. Otherwise, two vertices that have all but one neighbour in common would get distinct labels, and after h iterations, all the vertices at distance at most h from them would already be assigned different colours. This would negatively affect the overall graph similarity assessment.
In order to overcome this problem, we introduce the concept of vertex importance, which is based upon the number of vertices that depend on a given node. Only important enough differences in the vertex neighbourhoods will be considered as sufficient to trigger a different labelling. Then, after h iterations, two vectors of label type counts can be compared with each other to arrive at a final graph similarity degree.
SimilaR algorithm in detail. The following description of the SimilaR algorithm will be illustrated based on a comparison between two functions: clamp1() (whose PDG we will from now on denote with F) and a second, similarly implemented function (with PDG denoted by G).

1. Vertex importance degrees. Firstly, each vertex v in F is assigned an importance degree:

δ_F(v) = 0.1 + Σ_{u : (v,u) ∈ E_F, ξ_F((v,u)) = CONTROL} δ_F(u) + 1.1 · Σ_{u : (v,u) ∈ E_F, ξ_F((v,u)) = DATA} δ_F(u).

In other words, a vertex v with outdegree equal to 0 has importance δ_F(v) = 0.1. Otherwise, its importance degree is set to 0.1 plus the sum of the importances of its outgoing control-dependent neighbours plus the sum of the importances of its outgoing data-dependent neighbours multiplied by 1.1. Note that if F is an acyclic graph, then it has a topological ordering, i.e., an arrangement of the vertices such that every edge is directed from earlier to later in the sequence. In such a case, δ_F is well-defined. Otherwise, we compute the importance degrees in a depth-first manner.
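In reverse topological order, the recurrence can be computed as in the following sketch; the edge-list representation is illustrative only:

```r
# delta(v) = 0.1 + sum over outgoing CONTROL successors
#                + 1.1 * sum over outgoing DATA successors;
# leaves (outdegree 0) get 0.1.
importance <- function(edges, n) {
  delta <- numeric(n)
  for (v in n:1) {            # vertices assumed numbered topologically
    out <- edges[edges$from == v, ]
    delta[v] <- 0.1 +
      sum(delta[out$to[out$type == "CONTROL"]]) +
      1.1 * sum(delta[out$to[out$type == "DATA"]])
  }
  delta
}
edges <- data.frame(from = c(1, 1, 2), to = c(2, 3, 3),
                    type = c("CONTROL", "DATA", "DATA"),
                    stringsAsFactors = FALSE)
print(importance(edges, 3))  # 0.42 0.21 0.10
```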
Next, the importance degrees are normalised. Tables 2 and 3 give the (normalised) importance degrees of the vertices in the two graphs studied.
2. Vertex labels. Recall from the previous section that each vertex v ∈ V_F, u ∈ V_G has been assigned a label, ζ_F(v), ζ_G(u) ∈ {0, ..., 25}, based on the type of operation it represents (see Tables 4 and 5).
In the i-th (out of h in total; here we fix h = 3, see Bartoszuk, 2018 for a discussion) iteration of the SimilaR algorithm, we assign new labels ζ^i_F, ζ^i_G according to the labels previously considered.

(a) Iteration i = 1. In the first iteration, the initial labels, ζ_F, ζ_G, are simply remapped to consecutive integers. This yields ζ^1_F and ζ^1_G as given in Tables 4 and 5.

(b) Iterations i = 2 and i = 3. In subsequent iterations, we seek groups of similar vertices so as to assign them the same label. Two vertices v, u ∈ V_F ∪ V_G are considered similar (with no loss in generality, we assume v ∈ V_F and u ∈ V_G below) whenever they have been assigned the same label in the previous iteration, i.e., ζ^{i-1}_F(v) = ζ^{i-1}_G(u), and they have outgoing neighbours with the same labels. However, for greater flexibility, we allow the neighbourhoods to differ slightly: unmatched neighbours (those whose labels do not occur in the multiset of common neighbours' vertex labels) are ignored, provided that their importance degrees are small relative to M_F and M_G, the medians of the importance degrees of the vertices in F and G, respectively. The above similarity relation is obviously reflexive and symmetric. When we compute its transitive closure, we obtain an equivalence relation whose equivalence classes determine the sets of vertices that shall obtain identical labels.
3. Partial similarity degrees. Let m be the maximal integer label assigned above and let L^i_F = (L^i_{F,1}, ..., L^i_{F,m}) be a vector of label counts, where L^i_{F,j} = |{v ∈ V_F : ζ^i_F(v) = j}|; see Table 6. We define L^i_G in much the same way; see Table 7. Based on these vectors, we introduce a symmetric "partial" similarity measure of the label sequences, μ^i(F, G), as well as its nonsymmetric version, μ̄^i(F, G). The partial similarities for i = 1, 2, 3 are given in Table 8.

4. Final similarity degrees. The overall similarity degree μ(F, G) is defined as the arithmetic mean of the h = 3 partial similarities (reported in Table 8). We obtain its nonsymmetric version μ̄(F, G) in the same way.

Having discussed the algorithms behind the SimilaR package, let us proceed with the description of its user interface.
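As a generic, hedged illustration (not necessarily SimilaR's exact definition, which is given via the tables referenced above), two label-count vectors can be compared through their overlap:

```r
# Overlap-based comparison of two hypothetical label-count vectors.
L_F <- c(3, 1, 2, 0)
L_G <- c(2, 1, 1, 1)
sym  <- 2 * sum(pmin(L_F, L_G)) / (sum(L_F) + sum(L_G))  # symmetric
asym <- sum(pmin(L_F, L_G)) / sum(L_F)  # "F contained in G" variant
print(round(c(sym, asym), 3))  # 0.727 0.667
```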

Illustrative examples
The SimilaR package can be downloaded from CRAN and installed on the local system via a call to:

R> install.packages("SimilaR")
Here we are working with version 1.0.8 of the package.
Once the package is loaded and its namespace is attached by calling:

R> library("SimilaR")

two functions are made available to a user. SimilaR_fromTwoFunctions() is responsible for assessing the similarity between a pair of function objects (R is a functional language, hence assuming that functions constitute basic units of code seems natural). Moreover, SimilaR_fromDirectory(), which we shall use in the next section, is a conveniently vectorised version of the former, performing the comparison of all the scripts in a given directory.
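A minimal usage sketch (assuming SimilaR is installed; see the package manual for the full list of arguments; the example functions are ours):

```r
# Compare two functionally equivalent implementations of the sum of
# squared deviations; the pair is expected to be reported as similar.
library("SimilaR")
f <- function(x) sum((x - mean(x))^2)
g <- function(y) { m <- mean(y); total <- sum((y - m)^2); total }
SimilaR_fromTwoFunctions(f, g)
```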

A case study
In the previous section we illustrated that SimilaR can easily identify code chunks that can be transformed onto each other. Now we shall demonstrate its usefulness in a real-world scenario: let us compare the code-bases of two R packages: nortest (Gross and Ligges, 2015) and DescTools (Signorell et al., 2020). The former implements five significance tests for normality, while the latter provides a large collection of miscellaneous tools (ca. 550 functions in total).

1. Set-up. First we attach the required packages and set up the directory where we shall store the data that we are going to feed the algorithm with at a later stage.
2. Extract the code-base. Here the list of the objects exported by both packages is determined by querying their corresponding package:DescTools and package:nortest environments. Moreover, a call to deparse() on a function object gives a plain-text representation of its source code.
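The two building blocks used in this step can be sketched as follows (the stats package is used here for illustration, as it is attached in every R session):

```r
# Query an attached package's environment for its exported objects and
# deparse one of its functions into plain text.
exports <- ls("package:stats")
print(head(exports, 3))
src <- deparse(stats::sd)   # character vector, one source line each
cat(src, sep = "\n")
print(is.character(src))    # TRUE
```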
3. Run the algorithm. Now we ask the algorithm to fetch the two source files in the output directory and execute all the pairwise comparisons between the functions defined therein.

R> print(head(results, 10))
For greater readability, the results are reported in Table 9.

Table 9: Similarity report (the top 10 results) for the comparison between the code-base of the DescTools and nortest packages.
Discussion. We observe that 5 function pairs were marked as similar (Decision = 1). The top 4 results accurately indicate the corresponding normality tests from the two packages -their sources are identical.
Interestingly, DescTools does provide the AndersonDarlingTest() function, but it implements a version of the goodness-of-fit measure for testing against any probability distribution provided. In other words, the c.d.f. of a normal distribution is not hard-coded in its source, which is thus significantly different from the code of ad.test().
It is also worth noting that there are no false positives in terms of statistical tool types - all the functions deal with some form of goodness-of-fit testing; recall that DescTools defines ca. 550 functions in total.

Discussion
We have introduced an algorithm to quantify the similarity between a pair of R source code chunks. The method is based on carefully prepared Program Dependence Graphs, which assure that semantically equivalent code pieces are represented in the same manner even if they are written in much different ways. This makes the algorithm robust with respect to the most typical attacks. In a few illustrative examples, we have demonstrated typical code alterations that the algorithm is invariant to, for instance, aliasing of variables, changing the order of independent code lines, unwinding nested function calls, etc.
In the presented case study we have analysed the similarities between the DescTools and nortest packages. Recall that most of the cloned function pairs are correctly identified, proving the practical usefulness of the SimilaR package. The reported above-threshold similarity between CramerVonMisesTest() and ad.test() is - strictly speaking - a false positive; nevertheless, our tool correctly indicates that the two functions have been implemented in much the same way. This might serve as a hint to package developers that the two tests could be refactored so as to rely on a single internal function; de-duplication is among the most popular ways to increase the broadly conceived quality of software code.
On the other hand, the algorithm failed to match the (implementation-wise) very different AndersonDarlingTest() (generic distribution) with the specific ad.test() (normal distribution family). However, comparisons of such a kind, in order to be successful, would perhaps require the use of an extensive knowledge-base and are of course beyond the scope of our tool.
Finally, let us note that due to the use of a new polynomial-time algorithm, assessing the similarity of two Program Dependence Graphs is relatively fast. This makes SimilaR appropriate for mining software repositories even of quite considerable sizes. However, some pre-filtering of function pairs (e.g., based on cluster analysis) to avoid performing all the pairwise comparisons would make the system even more efficient and scalable.
Future versions of the SimilaR package will be equipped with standalone routines aiming at improving the quality and efficiency of R code, such as detecting dead or repeated code, measuring cyclomatic complexity, checking if the program is well structured, etc.