
GSoC Ideas 2020

Richard Müller edited this page Feb 19, 2020 · 30 revisions

Overview

This document provides an overview of application instructions and project ideas for Google Summer of Code 2020. First, please make sure to inform yourself about Getaviz, the jQA-dashboard, and software visualization in general before applying. Have a look at our online demos and publications; both are linked in the corresponding READMEs of the projects. If you have any questions, feel free to ask one of the mentors or open an issue.

Important: Your proposal should follow our proposal template and provide all the required information.

Every project idea listed below includes a list of keywords representing the technologies you will most likely get in touch with during the project. This does not mean you already have to be familiar with them; prerequisites are listed separately. Getaviz is a very heterogeneous project, containing different components that use completely different technologies. The keywords will help you find a project you are interested in, so you can work with the technologies you are enthusiastic about! Furthermore, the focus of most projects can be shifted a bit according to your interests. Just talk to us! The following list gives you an overview of the technologies we use in Getaviz:

Hardware: Microsoft HoloLens, HTC Vive, Oculus Rift

Visualization Frameworks: A-Frame, d3

Programming Languages: Java, JavaScript, Ruby

Frameworks and Tools: react, neo4j, jQAssistant

Project Ideas

HoloLens Support

  • Brief explanation
    Our visualizations are currently generated in A-Frame and can be viewed in a web browser. A-Frame supports the WebVR interface and therefore also runs on HoloLens and HTC Vive out of the box. However, it is currently not possible to navigate through the visualization or interact with it as it is on the desktop. If you choose this topic, you will extend the A-Frame support so that Getaviz provides the same features on desktop and HoloLens. Finding suitable interaction concepts is a highly creative challenge where you can bring in your own ideas! We will try to provide a HoloLens, but at the moment the chances are rather slim. If you have access to a HoloLens, for example at your university, this would simplify things a lot.
  • Expected results
    The visualization should be depicted properly using the features of AR. It should be possible to navigate somehow, and some basic interaction concepts should be implemented, e.g., a search bar, filtering elements, and highlighting elements.
  • Involved Technologies: Java, A-Frame, Microsoft HoloLens
  • Knowledge Prerequisite: Experience with JavaScript. Experience with A-Frame or AR is not necessary.
  • Mentor: David Baum [david.baum(at)uni-leipzig.de]
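
    As a starting point for such interaction concepts, a gaze-based highlight could be built as an A-Frame component. The following is a minimal sketch, not existing Getaviz code; the component name, colors, and attribute are assumptions for illustration. The color logic is kept in a plain function so it works independently of A-Frame:

    ```javascript
    // Hypothetical sketch: highlight an entity while the gaze cursor hovers it.
    // The component name 'gaze-highlight' and the colors are assumptions.

    // Pure helper: decide which color an element should currently show.
    function highlightColor(baseColor, hovered) {
      return hovered ? '#ffcc00' : baseColor; // yellow highlight while hovered
    }

    // Register the component only when A-Frame is present (browser build).
    if (typeof AFRAME !== 'undefined') {
      AFRAME.registerComponent('gaze-highlight', {
        schema: { base: { default: '#4287f5' } },
        init: function () {
          const el = this.el;
          // A-Frame's cursor emits mouseenter/mouseleave on gazed entities.
          el.addEventListener('mouseenter', () =>
            el.setAttribute('color', highlightColor(this.data.base, true)));
          el.addEventListener('mouseleave', () =>
            el.setAttribute('color', highlightColor(this.data.base, false)));
        }
      });
    }
    ```

    In A-Frame markup this would be attached as `<a-box gaze-highlight>`; the same pattern extends to filtering or showing detail information on hover.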

Extend HTC Vive Support

  • Brief explanation
    Last year, we implemented rudimentary support for HTC Vive using A-Frame. Currently, it is possible to view visualizations with it and navigate using the controllers. Besides that, it is not possible to interact with the visualization as it is on the desktop. If you choose this topic, you will extend the A-Frame support so Getaviz will provide the same features on desktop and HTC Vive. Finding suitable interaction concepts is a highly creative challenge where you can bring in your own ideas!

We can provide an HTC Vive to you, but only at our local virtual reality laboratory. If you can't work here, you'll have to find an HTC Vive yourself, for example at your local university. If possible, we will be happy to help.

  • Expected results It should be possible to navigate somehow, and some basic interaction concepts should be implemented, e.g., a search bar, filtering elements, and highlighting elements
  • Involved Technologies: JavaScript, A-Frame, HTC Vive
  • Knowledge Prerequisite: Experience with JavaScript. Experience with A-Frame or VR is not necessary.
  • Mentor: David Baum [david.baum(at)uni-leipzig.de]
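
    One way to approach controller-based interaction is to map the Vive controller events A-Frame already emits (`triggerdown`, `gripdown`, `trackpaddown`) to visualization actions. The sketch below is an assumption about how such a mapping could look, not existing Getaviz code; the action names and the `getaviz-action` event are hypothetical:

    ```javascript
    // Hypothetical sketch: map HTC Vive controller events to actions.
    // triggerdown/gripdown/trackpaddown are events emitted by A-Frame's
    // vive-controls component; the action names are assumptions.

    const controllerActions = {
      triggerdown: 'select',     // trigger selects the pointed-at element
      gripdown: 'filter',        // grip toggles filtering
      trackpaddown: 'highlight'  // trackpad highlights related elements
    };

    function actionForEvent(eventName) {
      return controllerActions[eventName] || null;
    }

    if (typeof AFRAME !== 'undefined') {
      AFRAME.registerComponent('vive-interaction', {
        init: function () {
          Object.keys(controllerActions).forEach((evt) => {
            this.el.addEventListener(evt, () => {
              // Re-emit as a single hypothetical event the UI can handle.
              this.el.emit('getaviz-action', { action: actionForEvent(evt) });
            });
          });
        }
      });
    }
    ```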

Facilitation of Ruby Parsing

  • Brief explanation
    Currently, the Ruby behavior parser of Getaviz requires instrumenting the Ruby code to start the tracing at a specific point. It is desirable to give the user of the behavior parser the possibility to determine the start and end of the tracing via a command line argument.
  • Expected results
    • Package the structure, behavior, and evolution parsers as a gem with binaries
    • Provide suitable command line options for all parsers for output files and for Git repositories as input
    • Provide command line options for the behavior parser to determine either the source code file and line or the class and method where the tracing starts and where it should finish
    • Provide suitable filter options for all parsers
    • Create unit tests for all parsers
  • Involved Technologies: Ruby
  • Knowledge Prerequisite: Experience with at least one high-level language, e.g., Ruby or Java.
  • Mentor: Jan Schilbach [jan.schilbach(at)uni-leipzig.de]

Visualization Wizard

  • Brief explanation
    The jQA-dashboard supports software project managers in decision making. Its data source is an existing neo4j database with structural, behavioral, and evolutionary information of a software project. The dashboard consists of interactive react components where each component supports a certain task, for example hotspot, ownership, or test coverage analysis. Currently, the supported visualizations and queries are hard-coded, but the latter can be adapted locally by a user (expert mode toggle). This configurability strongly meets the needs of users and should be extended.
  • Expected results
    First, the dashboard should be extended to enable the user-defined configuration and generation of custom visualizations (see also GitHub Issue #5). Second, it should be possible to share these visualizations with other users. Hence, the following features should be supported by the wizard:
    • Choose relevant data to be visualized (this step results in a cypher query)
    • Choose a visualization template (table, pie chart, treemap, etc.)
    • Generate the visualization
    • Name and describe the visualization
    • Share the visualization
  • Involved Technologies: JavaScript, react, d3, neo4j
  • Knowledge Prerequisite
    The challenge in this project is to design a suitable architecture and to implement it with react and JavaScript. As you will have to write JavaScript and react code, you should already be familiar with JavaScript or react, ideally with both. Experience with d3 and/or neo4j cypher queries is not necessary, but helpful. It is important that you are willing to learn and want to dive into new technologies.
  • Mentor: Richard Müller [rmueller(at)wifa.uni-leipzig.de]
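
    The wizard steps listed above could be collected into a single visualization descriptor. The following is a minimal sketch under assumptions: the field names, the `buildVisualization` helper, and the example Cypher query are all hypothetical, not part of the existing dashboard:

    ```javascript
    // Hypothetical sketch: turn the wizard's answers into a shareable
    // visualization descriptor. All names here are assumptions.

    function buildVisualization(step) {
      // Data selection and template choice are the two mandatory steps.
      if (!step.query || !step.template) {
        throw new Error('query and template are required');
      }
      return {
        name: step.name || 'Untitled visualization',
        description: step.description || '',
        template: step.template,  // e.g. 'table', 'pie', 'treemap'
        query: step.query,        // Cypher query selecting the data
        shared: false             // sharing is a separate, later step
      };
    }

    // Example: a treemap of how many types each package contains.
    const custom = buildVisualization({
      name: 'Types per package',
      template: 'treemap',
      query: 'MATCH (p:Package)-[:CONTAINS]->(t:Type) RETURN p.name, count(t)'
    });
    ```

    A descriptor like this could then be serialized to JSON for the sharing step and replayed against the neo4j database to regenerate the visualization.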

C# jQAssistant Plugin with Unity Support

  • Brief explanation
    jQAssistant is an open source tool for scanning software artifacts. The scanned software artifacts are stored as a graph in a neo4j database and serve as a data basis for analyses and visualizations with Getaviz or the jQA-dashboard. There are scanners for different programming languages and data sources, for example Java source code, Java bytecode, PHP source code, GitHub issues, stack traces, and others. In order to support the programming language C#, a scanner for jQAssistant has to be developed. Additionally, an editor extension for Unity shall be written, which executes the scanner against a Unity project, checks defined rules, and presents the results to the user.

  • Expected results

    • The goal of this project is to implement a scanner for the programming language C# that stores the information in a neo4j database.
    • It is not the primary goal to develop your own parser or tracer. It is much better to use existing tools and adapt them to produce the required output.
  • Involved Technologies: Java, jQAssistant, neo4j, Unity

  • Knowledge Prerequisite: You should already have experience with Java and Unity. With this project you can deepen your knowledge and get in touch with almost every aspect of the software artifact.

  • Mentor: Richard Müller [rmueller(at)wifa.uni-leipzig.de]

Circle of Related Elements

  • Brief explanation
    In software visualizations, the relations between elements are a critical part of task solving. Because of their huge number and different kinds, they are normally not displayed at all, and the user has to decide which of them should be displayed. A simple way to represent relations is drawing lines between the elements. However, in a complex visualization with a lot of elements, the lines become too long to get a suitable overview of the relations. Instead, the related elements could be temporarily arranged in a circle around the starting element.
  • Expected results The related elements are temporarily arranged in a circle around the starting element. Leaving this circle with the mouse resets the position of the related elements, and the relation is displayed otherwise. Clicking on a related element navigates to its original position.
  • Involved Technologies: JavaScript, A-Frame
  • Knowledge Prerequisite: Experience with JavaScript. Experience with A-Frame is not necessary but will be helpful.
  • Mentor: Pascal Kovacs [pkovacs(at)uni-leipzig.de]
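
    The geometric core of this idea, placing n related elements evenly on a circle around the starting element, can be sketched in a few lines. The coordinate conventions (circle in the x/z plane, fixed radius) are assumptions for illustration:

    ```javascript
    // Sketch: compute n positions evenly spaced on a circle around a
    // center point. Laying the circle in the x/z plane is an assumption.

    function circlePositions(center, radius, n) {
      const positions = [];
      for (let i = 0; i < n; i++) {
        const angle = (2 * Math.PI * i) / n; // evenly spaced angles
        positions.push({
          x: center.x + radius * Math.cos(angle),
          y: center.y,                        // keep the elements level
          z: center.z + radius * Math.sin(angle)
        });
      }
      return positions;
    }
    ```

    The resulting positions would be applied temporarily to the related entities and reset when the mouse leaves the circle.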

Magnifiers and Previews

  • Brief explanation
    Detecting single elements in a complex visualization is sometimes very tough because of the huge number of very small displayed elements. Navigation, like zooming or panning, becomes exhausting over time and comes along with losing the overview. In these cases, a magnifier can help to detect the elements of interest with less effort than using navigation. For related elements, a preview of their representations in an additional window can also help to avoid unnecessary navigation.
  • Expected results
    The user can display a magnifier over the current mouse position to detect and select small displayed elements. They can also display previews of related elements in additional windows, which also allow navigating to an element by clicking on it.
  • Involved Technologies: JavaScript, A-Frame
  • Knowledge Prerequisite
    Experience with JavaScript. Experience with A-Frame is not necessary but will be helpful.
  • Mentor: Pascal Kovacs [pkovacs(at)uni-leipzig.de]
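
    The basic lens math behind a magnifier can be sketched independently of any rendering: given the mouse position, the lens size, and a zoom factor, compute which source region of the visualization is shown. The function name and the square-lens assumption are hypothetical:

    ```javascript
    // Sketch: compute the source region a square magnifier of a given
    // on-screen size shows at a given zoom factor. Names are assumptions.

    function magnifierRegion(mouse, lensSize, zoom) {
      const sourceSize = lensSize / zoom; // higher zoom -> smaller region
      return {
        x: mouse.x - sourceSize / 2,      // center the region on the cursor
        y: mouse.y - sourceSize / 2,
        width: sourceSize,
        height: sourceSize
      };
    }
    ```

    Rendering that region at `lensSize` on screen then yields the magnified view, e.g. an 80-pixel lens at 4x zoom shows a 20-pixel source region.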

User-driven Decorative Animations

  • Brief explanation
    Highlighting elements in complex visualizations can focus the user's attention on specific points of interest for the task at hand. Decorative animations, like pulsing glow effects, are one way to highlight specific elements to draw the user's attention.
  • Expected results
    The user can decide which decorative animation, or which combination of them, represents which property of the elements.
  • Involved Technologies: JavaScript, A-Frame
  • Knowledge Prerequisite: Experience with JavaScript. Experience with A-Frame is not necessary but will be helpful.
  • Mentor: Pascal Kovacs [pkovacs(at)uni-leipzig.de]
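
    A pulsing glow, for example, boils down to a periodic intensity function that a render loop samples each frame. This is a sketch under assumptions; the period and intensity range would be the user-configurable parameters mentioned above:

    ```javascript
    // Sketch: intensity of a pulsing glow at elapsed time t (milliseconds).
    // A sine oscillation is mapped from [-1, 1] to the range [min, max].

    function pulseIntensity(t, periodMs, min, max) {
      const phase = Math.sin((2 * Math.PI * t) / periodMs);
      return min + ((phase + 1) / 2) * (max - min);
    }
    ```

    In A-Frame, such a function could drive an emissive material property from a component's `tick` handler, and different properties of an element could be bound to different periods or ranges.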

Multiple Visualizations in the Same UI

  • Brief explanation
    So far, only a single view of a model is visualized at a time. However, for many purposes it is necessary to simultaneously analyze more than one view of the model or to analyze multiple views of different models, e.g., comparing two versions of the same system.
  • Expected results
    The main result of this idea is that the UI can handle multiple views of the same model or of different models. As a second result, it should be possible to couple these views, so that a user event in one view also affects the other view. For example, selecting an element in the first view selects the same element in the second view, if it exists and is visible.
  • Involved Technologies: JavaScript, A-Frame
  • Knowledge Prerequisite: Experience with JavaScript. Experience with A-Frame is not necessary.
  • Mentor: Pascal Kovacs [pkovacs(at)uni-leipzig.de]
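
    The coupling of views could follow a simple mediator pattern: views register with a coordinator, and an event in one view is forwarded to the others. The class and the view interface (`has`/`select` methods) below are assumptions for illustration, not existing Getaviz code:

    ```javascript
    // Hypothetical sketch: mirror a selection from one view into all other
    // registered views that actually contain the selected element.

    class ViewMediator {
      constructor() {
        this.views = [];
      }

      register(view) {
        this.views.push(view);
      }

      // Propagate a selection, skipping the view it originated from and
      // any view that does not contain the element.
      select(sourceView, elementId) {
        this.views.forEach((view) => {
          if (view !== sourceView && view.has(elementId)) {
            view.select(elementId);
          }
        });
      }
    }
    ```

    The same mechanism generalizes to other coupled events, such as filtering or highlighting across views.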

Oculus Rift Support

  • Brief explanation
    Getaviz generates A-Frame visualizations. Currently, they can be viewed via browser or HTC Vive. To better support virtual reality, the visualizations, including navigation and interaction, should be adapted to Oculus Rift as well. Finding suitable interaction and navigation concepts is a highly creative challenge where you can bring in your own ideas! We can provide an Oculus Rift to you, but only at our local virtual reality laboratory. If you can't work here, you'll have to find an Oculus Rift yourself, for example at your local university. If possible, we will be happy to help.
  • Expected results
    Implementation of a basic interaction concept (search bar, filtering elements, highlighting elements, viewing detail information) and a navigation concept
  • Involved Technologies: Java, JavaScript, A-Frame, Oculus Rift
  • Knowledge Prerequisite
    This is a very challenging and explorative project! You should have experience with JavaScript. It is very important that you are creative and bring in your own ideas, but also that you are willing to dive into new technologies and try out different solution approaches.
  • Mentor: David Baum [david.baum(at)uni-leipzig.de]

A-Frame Animation Framework

  • Brief explanation
    To facilitate the generation of representative and decorative animations a framework to handle events initiating animations is desirable. Such a framework already exists for X3DOM but is missing for A-Frame.
  • Expected results
    • Implementation of a framework to initiate the different animations supported by the x3dom-animation-framework
    • Usage of the same core format for animation events
  • Involved Technologies: JavaScript
  • Knowledge Prerequisite: Experience in JavaScript
  • Mentor: Jan Schilbach [jan.schilbach(at)uni-leipzig.de]
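
    The core of such a framework is a dispatcher that maps incoming events to the animations they should start. The sketch below is loosely modeled on that idea; the class, the event names, and the animation-as-function representation are assumptions, not the x3dom-animation-framework's actual API:

    ```javascript
    // Hypothetical sketch of an event-driven animation dispatcher: events
    // are registered against animations, and triggering an event starts
    // every animation bound to it.

    class AnimationFramework {
      constructor() {
        this.handlers = {};
      }

      // Register an animation (here: a function) to start on an event.
      on(eventName, animation) {
        (this.handlers[eventName] = this.handlers[eventName] || []).push(animation);
      }

      // Fire an event: each registered animation is started with the
      // event payload; the started animations are returned.
      trigger(eventName, payload) {
        return (this.handlers[eventName] || []).map((animation) => animation(payload));
      }
    }
    ```

    Keeping the event format identical to the X3DOM variant, as required above, would then only constrain the shape of `payload`, not the dispatcher itself.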