Publications (10 of 11)
Masud, A. N., Lisper, B. & Ciccozzi, F. (2018). Automatic Inference of Task Parallelism in Task-graph-based Actor Models. IEEE Access, 6, 78965-78991
2018 (English). In: IEEE Access, E-ISSN 2169-3536, Vol. 6, p. 78965-78991. Article in journal (Refereed). Published.
Abstract [en]

Automatic inference of task level parallelism is fundamental for ensuring many kinds of safety and liveness properties of parallel applications. For example, two tasks running in parallel may be involved in data races when they have conflicting memory accesses, or one is affecting the termination of another by updating shared variables. In this article, we have considered a task-graph-based actor model, used in signal processing applications (e.g., baseband processing in wireless communication, LTE uplink processing) that are deployed on many-core platforms, in which actors, task-graphs and tasks are the active entities running in parallel. Actors invoke task graphs, which in turn invoke tasks, and they communicate through message passing, thus creating different kinds of dependencies and parallelism in the application. We introduce a novel May Happen in Parallel (MHP) analysis for complex parallel applications based on our computational model. The MHP analysis consists of (i) data-flow analysis applicable to parallel control-flow structures inferring MHP facts representing pairs of tasks running in parallel, (ii) identification of all direct and indirect communication by generating a context-free grammar and enumerating valid strings representing parallelism and dependencies among active entities, and (iii) inferring MHP facts when multiple task-graphs communicate. Our analysis is applicable to other computational models (e.g., Cilk or X10) too. We have fully implemented our analysis and evaluated it on signal processing applications consisting of a maximum of 36.57 million lines of code representing 232 different tasks. The analysis takes approximately 7 minutes to identify all communication information and 10.5 minutes to identify 12052 executable parallel task-pairs (to be analysed for concurrency bugs), showing that our analysis is scalable for industrial-sized code-bases.
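The core question an MHP analysis answers can be sketched on a toy structure. The following minimal example (the series-parallel shape, task names, and functions are invented for illustration, not the paper's task-graph-based actor model) computes which task pairs may overlap in time:

```python
from itertools import combinations

# A toy illustration of "may happen in parallel" (MHP) pairs over a
# series-parallel task structure -- a stand-in for the richer actor/task-graph
# model analysed in the paper. ("seq", parts) runs parts one after another;
# ("par", parts) runs them concurrently.

def tasks(node):
    """All task names occurring in a structure."""
    if isinstance(node, str):
        return {node}
    _, parts = node
    return set().union(*(tasks(p) for p in parts))

def mhp_pairs(node):
    """Pairs of tasks that may run in parallel."""
    if isinstance(node, str):
        return set()
    kind, parts = node
    pairs = set().union(*(mhp_pairs(p) for p in parts))
    if kind == "par":  # tasks in different parallel branches may overlap
        for a, b in combinations(parts, 2):
            pairs |= {frozenset((t, u)) for t in tasks(a) for u in tasks(b)}
    return pairs

# t1; then (t2 || (t3; t4)); then t5
g = ("seq", ["t1", ("par", ["t2", ("seq", ["t3", "t4"])]), "t5"])
assert mhp_pairs(g) == {frozenset(("t2", "t3")), frozenset(("t2", "t4"))}
```

Only t2 can overlap with t3 or t4; every other pair is ordered by the sequential structure, which is exactly the kind of fact a data-race checker would consume.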

Keywords
May happen in parallel, data flow analysis, actor model, parallel task graph, graph reachability, UML profile
National Category
Engineering and Technology; Computer Systems
Identifiers
urn:nbn:se:mdh:diva-41748 (URN); 10.1109/ACCESS.2018.2885705 (DOI); 000454857600001 (); 2-s2.0-85058163957 (Scopus ID); 2169-3536 (ISSN)
Projects
Static Program Analysis for Complex Embedded Systems; MOMENTUM: analysis of models towards compilation to predictable embedded real-time and safety-critical applications
Available from: 2018-12-18. Created: 2019-01-25. Last updated: 2019-01-17. Bibliographically approved.
Lisper, B., Masud, A. N. & Khanfar, H. (2015). Static Backward Demand-Driven Slicing. In: PEPM '15: Proceedings of the 2015 Workshop on Partial Evaluation and Program Manipulation. Paper presented at the ACM SIGPLAN-SIGACT Symposium on Partial Evaluation and Program Manipulation (PEPM), 12-18 Jan 2015, Mumbai, India (pp. 115-126).
2015 (English). In: PEPM '15: Proceedings of the 2015 Workshop on Partial Evaluation and Program Manipulation, 2015, p. 115-126. Conference paper, Published paper (Refereed).
Abstract [en]

Program slicing identifies the program parts that may affect certain properties of the program, such as the outcomes of conditions affecting the program flow. Ottenstein's Program Dependence Graph (PDG) based algorithm is the state-of-practice for static slicing today: it is well-suited to applications where many slices are computed, since the cost of building the PDG can then be amortized over the slices. But there are applications that require few slices of a given program, and where computing all the dependencies may be unnecessary. We present a light-weight interprocedural algorithm for backward static slicing where the data dependence analysis is done using a variant of the Strongly Live Variables (SLV) analysis. This allows us to avoid building the Data Dependence Graph, and to slice program statements "on-the-fly" during the SLV analysis, which is potentially faster for computing few slices. Furthermore, we use an abstract interpretation-based value analysis to extend our slicing algorithm to slice low-level code, where data dependencies are not evident due to dynamically calculated addresses. Our algorithm computes slices as sets of Control Flow Graph nodes: we show how to adapt existing techniques to generate executable slices that correspond to semantically correct code, where jump statements have been inserted at appropriate places. We have implemented our slicing algorithms, and made an experimental evaluation comparing them with the standard PDG-based algorithm for a number of example programs. We obtain the same accuracy as for PDG-based slicing, sometimes with substantial improvements in performance.
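The demand-driven flavour of backward slicing can be sketched on straight-line code. The example below (the statement representation and names are assumptions made for the sketch; the paper's algorithm is interprocedural and handles control flow) propagates a demand set backwards, in the spirit of Strongly Live Variables:

```python
# Minimal sketch of backward demand-driven slicing on straight-line code.
# Each statement is modelled as (defined_var, set_of_used_vars); statements
# enter the slice "on-the-fly" while the demand set is propagated backwards.

def backward_slice(stmts, target):
    """Indices of statements that may affect the value of `target`."""
    demand = {target}        # variables whose values are still demanded
    in_slice = []
    for i in range(len(stmts) - 1, -1, -1):   # walk the program backwards
        defined, used = stmts[i]
        if defined in demand:
            in_slice.append(i)
            demand.discard(defined)   # this definition satisfies the demand
            demand |= used            # ...but its operands are now demanded
    return sorted(in_slice)

# x = input(); y = x + 1; z = 2; w = z + y
program = [("x", set()), ("y", {"x"}), ("z", set()), ("w", {"z", "y"})]
assert backward_slice(program, "w") == [0, 1, 2, 3]  # everything feeds w
assert backward_slice(program, "y") == [0, 1]        # z and w are irrelevant
```

The point of the demand-driven formulation is visible even here: slicing on `y` never inspects the dependencies of `z` or `w`, whereas a PDG-based approach would have computed all dependencies up front.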

Keywords
Static backward slicing, Unstructured control flow, Data flow equations, Computational complexity, Strongly live variable, Abstract Interpretation
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-28103 (URN); 10.1145/2678015.2682538 (DOI); 000461460400013 (); 2-s2.0-84947975458 (Scopus ID); 978-1-4503-3297-2 (ISBN)
Conference
ACM SIGPLAN-SIGACT Symposium on Partial Evaluation and Program Manipulation (PEPM), 12-18 Jan 2015, Mumbai, India
Projects
APARTS - Advanced Program Analysis for Real-Time Systems; SYNOPSIS - Safety Analysis for Predictable Software Intensive Systems; TOCSYC - Testing of Critical System Characteristics (KKS)
Available from: 2015-06-11. Created: 2015-06-08. Last updated: 2019-06-18. Bibliographically approved.
Khanfar, H., Lisper, B. & Masud, A. N. (2015). Static backward program slicing for safety-critical systems. In: Lecture Notes in Computer Science. Paper presented at a conference held 22-26 June 2015 (pp. 50-65).
2015 (English). In: Lecture Notes in Computer Science, 2015, p. 50-65. Conference paper, Published paper (Refereed).
Abstract [en]

Static program slicing is a technique to detect the program parts (i.e. the “slice”) of the given program possibly affecting a given property. The technique is of interest for analysing safety-critical software, since it can identify the program parts that may affect various safety properties. Verification efforts can then be directed towards those parts, leading to a more efficient verification process. We have developed a novel method for static backward program slicing. The method works for well-structured programs, as commonly demanded by coding standards for safety-critical software. It utilises the program structure to obtain a highly efficient slicing process, where control dependencies are inferred from the program structure, and the slicing is done on-the-fly concurrently with the data dependence analysis. We have evaluated our method experimentally. For applications that require few slices to be taken, like checking for a set of safety properties, we obtain large speedups as compared with the standard method for static backward program slicing. We have also investigated how the speedup varies with various parameters such as code size, size of the slice relative to the full program, and relative frequency of conditions in the code.

Keywords
Dataflow analysis, Program dependency graph, Program slicing, Strongly live variable, Ada (programming language), Codes (symbols), Data flow analysis, Program processors, Safety engineering, Data dependence analysis, Program dependency graphs, Relative frequencies, Safety critical software, Safety critical systems, Verification process, Application programs
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:mdh:diva-29840 (URN); 10.1007/978-3-319-19584-1_4 (DOI); 2-s2.0-84947918293 (Scopus ID); 9783319195834 (ISBN)
Conference
22 June 2015 through 26 June 2015
Available from: 2015-12-04. Created: 2015-12-04. Last updated: 2015-12-04. Bibliographically approved.
Albert, E., Genaim, S. & Masud, A. N. (2013). On the Inference of Resource Usage Upper and Lower Bounds. ACM Transactions on Computational Logic, 14(3), Article number 22
2013 (English). In: ACM Transactions on Computational Logic, ISSN 1529-3785, E-ISSN 1557-945X, Vol. 14, no. 3, Article number 22. Article in journal (Refereed). Published.
Abstract [en]

Cost analysis aims at determining the amount of resources required to run a program in terms of its input data sizes. The most challenging step is to infer the cost of executing the loops in the program. This requires bounding the number of iterations of each loop and finding tight bounds for the cost of each of its iterations. This article presents a novel approach to infer upper and lower bounds from cost relations. These relations are an extended form of standard recurrence equations that can be nondeterministic, contain inexact size constraints and have multiple arguments that increase and/or decrease. We propose novel techniques to automatically transform cost relations into worst-case and best-case deterministic one-argument recurrence relations. The solution of each recursive relation provides a precise upper bound and lower bound for executing a corresponding loop in the program. Importantly, since the approach is developed at the level of the cost equations, our techniques are programming language independent.
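The shape of the problem can be seen on a deliberately trivial instance. The recurrence below (chosen for the example; the paper's cost relations are nondeterministic and multi-argument, which is where the real difficulty lies) is solved into a closed form that is simultaneously a tight upper and lower bound:

```python
# Toy instance of the cost-relation idea: the relation C(0) = 0,
# C(n) = C(n-1) + n (iteration n costs n units) has the closed-form
# solution n(n+1)/2, obtained by summing 1 + 2 + ... + n.

def cost(n):
    """Evaluate the recurrence directly, for cross-checking the bound."""
    return 0 if n == 0 else cost(n - 1) + n

def closed_form(n):
    """Closed-form solution of the recurrence: the nth triangular number."""
    return n * (n + 1) // 2

assert all(cost(n) == closed_form(n) for n in range(50))
```

For a deterministic one-argument recurrence like this the closed form is exact; the article's contribution is getting from messy, nondeterministic cost relations down to worst-case and best-case recurrences of this solvable shape.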

Place, publisher, year, edition, pages
ACM Press, 2013
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-23899 (URN); 10.1145/2499937.2499943 (DOI); 000323892900006 (); 2-s2.0-84883620006 (Scopus ID)
Available from: 2013-12-27. Created: 2013-12-20. Last updated: 2017-12-06. Bibliographically approved.
Ben-Amram, A. M., Genaim, S. & Masud, A. N. (2012). On the termination of integer loops. In: Lecture Notes in Computer Science, vol. 7148 (pp. 72-87). Springer.
2012 (English). In: Lecture Notes in Computer Science, vol. 7148, Springer, 2012, p. 72-87. Chapter in book (Refereed).
Abstract [en]

In this paper we study the decidability of termination of several variants of simple integer loops, without branching in the loop body and with affine constraints as the loop guard (and possibly a precondition). We show that termination of such loops is undecidable in some cases, in particular, when the body of the loop is expressed by a set of linear inequalities where the coefficients are from ℤ∪{r} with r an arbitrary irrational; or when the loop is a sequence of instructions that compute either linear expressions or the step function. The undecidability result is proven by a reduction from counter programs, whose termination is known to be undecidable. For the common case of integer constraint loops with rational coefficients only we have not succeeded in proving either decidability or undecidability of termination; however, this attempt led to the result that a Petri net can be simulated with such a loop, which implies some interesting lower bounds. For example, termination for a given input is at least EXPSPACE-hard.

Place, publisher, year, edition, pages
Springer, 2012
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 7148
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-23900 (URN); 10.1007/978-3-642-27940-9_6 (DOI); 9783642279393 (ISBN)
Note

13th International Conference on Verification, Model Checking, and Abstract Interpretation, VMCAI 2012; Philadelphia, PA; United States; 22 January 2012 through 24 January 2012

Available from: 2013-12-27. Created: 2013-12-20. Last updated: 2014-01-09. Bibliographically approved.
Ben-Amram, A. M., Masud, A. N. & Genaim, S. (2012). On the termination of integer loops. ACM Transactions on Programming Languages and Systems, 34(4)
2012 (English). In: ACM Transactions on Programming Languages and Systems, ISSN 0164-0925, E-ISSN 1558-4593, Vol. 34, no. 4, Article number 16. Article in journal (Refereed). Published.
Abstract [en]

In this article we study the decidability of termination of several variants of simple integer loops, without branching in the loop body and with affine constraints as the loop guard (and possibly a precondition). We show that termination of such loops is undecidable in some cases, in particular, when the body of the loop is expressed by a set of linear inequalities where the coefficients are from ℤ∪{r} with r an arbitrary irrational; when the loop is a sequence of instructions that compute either linear expressions or the step function; and when the loop body is a piecewise linear deterministic update with two pieces. The undecidability result is proven by a reduction from counter programs, whose termination is known to be undecidable. For the common case of integer linear-constraint loops with rational coefficients we have not succeeded in proving either decidability or undecidability of termination, but we show that a Petri net can be simulated with such a loop; this implies some interesting lower bounds. For example, termination for a partially specified input is at least EXPSPACE-hard.
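The loop class under study is easy to write down even though its termination problem is hard. The sketch below (function names and the step bound are invented for the illustration) simply executes such a branch-free guarded loop for a bounded number of steps; it does not decide termination, which is exactly the point of the paper's hardness results:

```python
# A hedged illustration of the loop class studied: one affine guard and a
# branch-free linear update. Running the loop for a bounded number of steps
# is easy; deciding whether it terminates on all (or given) inputs is the
# hard problem analysed in the article.

def run_integer_loop(x, guard, update, max_steps=1000):
    """Iterations until the guard fails, or None if the step budget runs
    out (the loop may be non-terminating)."""
    for steps in range(max_steps):
        if not guard(x):
            return steps
        x = update(x)
    return None

# while x >= 0: x = x - 1   -- terminates for every initial x
assert run_integer_loop(5, lambda x: x >= 0, lambda x: x - 1) == 6
# while x >= 0: x = x + 1   -- diverges for every initial x >= 0
assert run_integer_loop(0, lambda x: x >= 0, lambda x: x + 1) is None
```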

Place, publisher, year, edition, pages
ACM Press, 2012
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-23898 (URN); 10.1145/2400676.2400679 (DOI); 000313658500002 (); 2-s2.0-84872372082 (Scopus ID)
Available from: 2013-12-27. Created: 2013-12-20. Last updated: 2017-12-06. Bibliographically approved.
Masud, A. N. (2012). Termination and Cost Analysis: Complexity and Precision Issues. (Doctoral dissertation). School of Computer Science, Technical University of Madrid
2012 (English). Doctoral thesis, monograph (Other academic).
Abstract [en]

The research in this thesis is related to static cost and termination analysis. Cost analysis aims at estimating the amount of resources that a given program consumes during the execution, and termination analysis aims at proving that the execution of a given program will eventually terminate. These analyses are strongly related; indeed, cost analysis techniques heavily rely on techniques developed for termination analysis. Precision, scalability, and applicability are essential in static analysis in general. Precision is related to the quality of the inferred results, scalability to the size of programs that can be analyzed, and applicability to the class of programs that can be handled by the analysis (independently from precision and scalability issues). This thesis addresses these aspects in the context of cost and termination analysis, from both practical and theoretical perspectives. For cost analysis, we concentrate on the problem of solving cost relations (a form of recurrence relations) into closed-form upper and lower bounds, which is the heart of most modern cost analyzers, and also where most of the precision and applicability limitations can be found. We develop tools, and their underlying theoretical foundations, for solving cost relations that overcome the limitations of existing approaches, and demonstrate superiority in both precision and applicability. A unique feature of our techniques is the ability to smoothly handle both lower and upper bounds, by reversing the corresponding notions in the underlying theory. For termination analysis, we study the hardness of the problem of deciding termination for a specific form of simple loops that arise in the context of cost analysis. This study gives a better understanding of the (theoretical) limits of scalability and applicability for both termination and cost analysis.

Place, publisher, year, edition, pages
School of Computer Science, Technical University of Madrid, 2012. p. 152
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-23902 (URN)
Available from: 2013-12-30. Created: 2013-12-20. Last updated: 2013-12-30. Bibliographically approved.
Albert, E., Genaim, S. & Masud, A. N. (2011). More precise yet widely applicable cost analysis. In: Lecture Notes in Computer Science, vol. 6538 (pp. 38-53). Springer.
2011 (English). In: Lecture Notes in Computer Science, vol. 6538, Springer, 2011, p. 38-53. Chapter in book (Refereed).
Abstract [en]

Cost analysis aims at determining the amount of resources required to run a program in terms of its input data sizes. Automatically inferring precise bounds, while at the same time being able to handle a wide class of programs, is a main challenge in cost analysis. (1) Existing methods which rely on computer algebra systems (CAS) to solve the obtained cost recurrence equations (CR) are very precise when applicable, but handle a very restricted class of CR. (2) Specific solvers developed for CR tend to sacrifice accuracy for wider applicability. In this paper, we present a novel approach to inferring precise upper and lower bounds on CR which, when compared to (1), is strictly more widely applicable while precision is kept and when compared to (2), is in practice more precise (obtaining even tighter complexity orders), keeps wide applicability and, besides, can be applied to obtain useful lower bounds as well. The main novelty is that we are able to accurately bound the worst-case/best-case cost of each iteration of the program loops and, then, by summing the resulting sequences, we achieve very precise upper/lower bounds. 

Place, publisher, year, edition, pages
Springer, 2011
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 6538
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-23901 (URN); 10.1007/978-3-642-18275-4_5 (DOI); 9783642182747 (ISBN)
Note

12th International Conference on Verification, Model Checking, and Abstract Interpretation, VMCAI 2011; Austin, TX; United States; 23 January 2011 through 25 January 2011

Available from: 2013-12-27. Created: 2013-12-20. Last updated: 2014-01-09. Bibliographically approved.
Masud, A. N., Sarwar, G. & Rahman, M. (2005). Coordinating Heterogeneous Peer Databases in P2P Network. Paper presented at the 8th International Conference on Computer and Information Technology (ICCIT) 2005 (pp. 276-280).
2005 (English). Conference paper, Published paper (Refereed).
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-23905 (URN)
Conference
8th International Conference on Computer and Information Technology (ICCIT) 2005
Available from: 2013-12-27. Created: 2013-12-20. Last updated: 2015-09-14. Bibliographically approved.
Masud, A. N., Joarder, M. M. & Tariq-ul-Azam, M. (2003). A general approach to natural language conversion. Paper presented at the IEEE 7th International Multitopic Conference (INMIC) 2003 (pp. 377-281).
2003 (English). Conference paper, Published paper (Refereed).
Abstract [en]

Natural language processing is a technique that includes both natural language understanding and natural language generation. Translating one natural language into another becomes complex due to structural differences, varieties of meanings, different forms of verbs, etc. In this paper, a general algorithm has been developed which takes one natural language (e.g., English or German) as input and produces another natural language (e.g., Bengali or Japanese) as output while preserving the same expression. For understanding the input natural language, a smart-parse algorithm has been developed which unambiguously and efficiently generates a stack of the parse tree from the existing structure. After parsing, it incorporates itself with the knowledge base and dictionary and finally produces the corresponding targeted natural language. After fixing the input and output natural languages, it is possible to use this algorithm for translation after simple modification.

National Category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-23906 (URN); 10.1109/INMIC.2003.1416754 (DOI); 0-7803-8183-1 (ISBN)
Conference
IEEE 7th International Multitopic Conference (INMIC) 2003
Available from: 2013-12-27. Created: 2013-12-20. Last updated: 2014-05-22. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-4872-1208