MadGraph allows the user to choose a process for which it generates the corresponding squared matrix element. In analytical calculations this is done by summing over spins, rewriting products of Dirac spinors in terms of the momenta of the external legs. However, for a process with many diagrams this method requires the analysis of a very large number of terms (growing like the square of the number of diagrams), making the computation slow and inefficient. One solution, known as the Helicity Amplitude Method, is to evaluate the amplitude numerically, so that the evaluation time grows only linearly with the number of diagrams. Within MadGraph this method is implemented through the HELAS/ALOHA subroutines. In the Helicity Amplitude Method, each amplitude must be evaluated numerically for every possible combination of the helicities of the initial- and final-state particles. MadGraph currently does this with a simple loop over all helicity combinations. During such a loop, however, the same spinor/polarization vector is recomputed many times for the same helicity of a given particle. The same kind of identical recomputation also occurs for some of the propagator wavefunctions, which typically depend only on a subset of the helicity configuration.

The purpose of this project is to exploit this fact to optimise the code: instead of computing the spinors multiple times, the plan is to compute each of them once and store its value in RAM. This technique is known as recycling. MadGraph already contains some optimisation in this direction, but only for identical propagators shared between different Feynman diagrams. The drawback of helicity recycling is that it complicates the filtering out of helicity combinations whose contribution vanishes exactly. The current version of MadGraph uses an on-the-fly filter to avoid computing such useless combinations.
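As a rough illustration of the recycling idea (a toy sketch, not MadGraph's actual HELAS/ALOHA interface; the wavefunction and amplitude functions below are hypothetical stand-ins), caching each external wavefunction by its (leg, helicity) pair reduces the number of wavefunction evaluations from n_legs × 2^n_legs in the naive loop to just n_legs × 2:

```python
import itertools
import cmath

def external_wavefunction(p, helicity):
    """Toy stand-in for a HELAS-style external wavefunction: a
    4-component object depending only on one leg's momentum and helicity."""
    E, px, py, pz = p
    phase = cmath.exp(1j * helicity * cmath.phase(complex(px, py) or 1))
    return tuple(phase * c for c in (E, px, py, pz))

def amplitude(wavefunctions):
    # Toy "diagram": contract all wavefunction components into one number.
    return sum(sum(w) for w in wavefunctions)

# Four massless external legs (e.g. a 2 -> 2 process), two helicities each.
momenta = [(100.0, 0.0, 0.0, 100.0),
           (100.0, 0.0, 0.0, -100.0),
           (100.0, 30.0, 40.0, 50.0),
           (100.0, -30.0, -40.0, -50.0)]

calls = {"naive": 0, "recycled": 0}

# Naive loop: recompute every wavefunction for every helicity combination.
for combo in itertools.product((-1, +1), repeat=len(momenta)):
    wfs = []
    for p, h in zip(momenta, combo):
        calls["naive"] += 1
        wfs.append(external_wavefunction(p, h))
    amplitude(wfs)

# Recycling: compute each (leg, helicity) wavefunction once, store, reuse.
cache = {}
for combo in itertools.product((-1, +1), repeat=len(momenta)):
    wfs = []
    for i, (p, h) in enumerate(zip(momenta, combo)):
        if (i, h) not in cache:
            calls["recycled"] += 1
            cache[(i, h)] = external_wavefunction(p, h)
        wfs.append(cache[(i, h)])
    amplitude(wfs)

print(calls)  # naive: 4 * 2**4 = 64 evaluations; recycled: 4 * 2 = 8
```

The same memoisation pattern extends to propagator wavefunctions, keyed on the subset of helicities they actually depend on, at the cost of the extra RAM needed to hold the cache.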
This will not be possible in the presence of recycling, and a study is required of how to determine in advance which sets of helicities vanish, in order to achieve the full optimisation. A preliminary version of helicity recycling has been implemented for e+e− → tt to measure exactly how much time can be saved. The results are shown below (in each run, around 4 s is taken by initialisation and the generation of the 10 million phase-space points):

Standard code:                     0m27.278s
Helicity recycling:                0m23.966s
Helicity recycling with filtering: 0m17.357s

Hence, when filtering is included, the running time drops by nearly 40%. This speed-up is expected to grow (probably exponentially) with the multiplicity of the final state; however, the amount of RAM used will also grow (probably in the same proportion), so a trade-off will be needed at large multiplicity. During these four months, the focus will be on implementing this helicity-recycling method within the LO gridpacks used by CMS/ATLAS to generate huge samples of events. The two experiments have requested a speed-up of at least 15% in order to be able to generate samples for the high-luminosity run; this optimisation is likely to surpass that request by a large factor. Depending on the success of the project, the method can then be extended to other types of LO generation as well as to NLO computations, since those are dominated by the evaluation of the real-emission diagrams.
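Since roughly 4 s of each run is fixed overhead (initialisation plus phase-space generation), the gain on the matrix-element evaluation itself is larger than the full-run numbers suggest. A small check of the arithmetic, taking the 4 s overhead as a common constant:

```python
# Timings (seconds) from the e+e- -> tt test; ~4 s of each run is fixed
# overhead (initialisation + generating the 10M phase-space points).
overhead = 4.0
total = {"standard": 27.278,
         "recycling": 23.966,
         "recycling+filtering": 17.357}

for name, t in total.items():
    # Time actually spent evaluating the matrix element.
    print(f"{name:22s} total {t:6.3f}s  matrix-element {t - overhead:6.3f}s")

# Relative reduction on the full run vs. on the matrix-element part alone.
full_gain = 1 - total["recycling+filtering"] / total["standard"]
me_gain = (1 - (total["recycling+filtering"] - overhead)
               / (total["standard"] - overhead))
print(f"full-run reduction: {full_gain:.1%}, "
      f"matrix-element reduction: {me_gain:.1%}")
```

Under this assumption the full-run reduction is about 36%, while the matrix-element time alone drops by roughly 43%, which is the figure most relevant for high-multiplicity processes where the overhead becomes negligible.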