[Archive] Questions regarding charging analysis with NanoFEEP thrusters in LEO

Message by Martin Tajmar:
Hello everyone,
I am currently trying to study the surface charging of a 1U CubeSat equipped with 4 NanoFEEP thrusters in a LEO environment. Considering the local plasma population and the ion thrusters, what I ultimately want to determine is whether the satellite needs a neutralizer, by monitoring its surface floating potential when a thruster is turned on. The goal of this neutralizer would not be to interact with the thruster, as is usually done with FEEP thrusters, but only to discharge the CubeSat if that turns out to be necessary. In theory, in such an environment, the neutralizer should not be needed, but I have to make sure by running a simulation.
As this is my first time using the SPIS software, and I have no knowledge of Java, I am a bit lost. As of today, I have made a simple model of the 1U CubeSat with Gmsh. I have modelled the different surfaces depending on the material used, as well as circular surfaces corresponding to the NanoFEEP thrusters. The meshing of the surface, the environment volume and its boundary is done, and I have tweaked some of the local and global parameters as well.
First, I am struggling with the setup of the sources (the thrusters). Should I set the sourceId to 1 in the local parameters so that it corresponds to Source1 in the global parameters and adopts all of its characteristics (sourceType1, sourceParticleType1, …)? I have already set sourceFlag1 to 1, but I am not sure that there is a link between the local sourceId and the global parameters of artificial source No. 1.
Also, the propellant used for the NanoFEEPs is Gallium, so I would have to create a new particle type. From what I have read in the documentation, the only way to do that is to modify the Java files and recompile them, but I don’t know how to do that and I am afraid of making a mess. Is there perhaps a more user-friendly method?
I was also wondering whether defining surfaces as sources is a problem. I don’t see why it would be, especially since I don’t want to model the thrusters in detail; I just want to study their effect on the spacecraft charging. Following this logic, I also did not pay much attention to the sourceType: I just want to set up a simple flow of ions and am not concerned with its distribution. But maybe that was a mistake?
I tried to run a simulation with the current parameters, replaced Gallium with Cesium ions, left some parameters at their default values, etc., just to see if it would run smoothly without worrying about the results, but here is what I get when I launch the simulation:
20000 Tue Jun 23 10:28:10 CEST 2015 WARNING: Conjugate Gradient (for Poisson Eq. solving) did not reach convergence within 1000 iterations.
20000 Tue Jun 23 10:28:10 CEST 2015 Error = 459052.34 while absolute tolerance was 4231.4883
20000 Tue Jun 23 10:28:10 CEST 2015 WARNING: Conjugate Gradient (for Poisson Eq. solving) did not reach convergence within 1000 iterations.
20000 Tue Jun 23 10:28:10 CEST 2015 Error = 1896277.8 while absolute tolerance was 2565.542
20000 Tue Jun 23 10:28:10 CEST 2015 WARNING: Conjugate Gradient (for Poisson Eq. solving) did not reach convergence within 1000 iterations.
20000 Tue Jun 23 10:28:10 CEST 2015 Error = 1539131.0 while absolute tolerance was 311320.53
20000 Tue Jun 23 10:28:10 CEST 2015 WARNING: Conjugate Gradient (for Poisson Eq. solving) did not reach convergence within 1000 iterations.
20000 Tue Jun 23 10:28:10 CEST 2015 Error = 1.0803276E7 while absolute tolerance was 17657.227
20000 Tue Jun 23 10:28:11 CEST 2015 WARNING: Conjugate Gradient (for Poisson Eq. solving) did not reach convergence within 1000 iterations.
20000 Tue Jun 23 10:28:11 CEST 2015 Error = 2.8413844E7 while absolute tolerance was 1191108.1
20000 Tue Jun 23 10:28:11 CEST 2015 WARNING: Conjugate Gradient (for Poisson Eq. solving) did not reach convergence within 1000 iterations.
20000 Tue Jun 23 10:28:11 CEST 2015 Error = 2.048902E7 while absolute tolerance was 912389.3
20000 Tue Jun 23 10:28:11 CEST 2015 WARNING: Conjugate Gradient (for Poisson Eq. solving) did not reach convergence within 1000 iterations.
20000 Tue Jun 23 10:28:11 CEST 2015 Error = 4.29275E7 while absolute tolerance was 1726714.9
20000 Tue Jun 23 10:28:12 CEST 2015 WARNING: Newton did not reach convergence
I don’t really know where to look to make it work, so I need some help there too. I tried increasing the iteration parameters in the Poisson equation tab, but I get the same kind of response, with other messages such as:
20000 Tue Jun 23 10:34:30 CEST 2015 WARNING: last SC local surface potential change was huge: 1103.9651V maximum change, and 1103.9529V average change
20000 Tue Jun 23 10:34:30 CEST 2015 it may be due to too small a SC capacitance (1.0E-6F) or too large a time step (last one = 4.018413E-30s)
20000 Tue Jun 23 10:34:31 CEST 2015 advance: 28 particles out of 216 eliminated because not in the right tetrahedron or zone (to compare to 15 injected particles)
Once again, it is my first time using the software, so I am probably not looking in the right place and have made some mistakes; that is why I am coming to you for help. Here are my current global parameters:
http://www.mediafire.com/view/h2lswu5ff0jjbzp/globalParameters-Cubesat.xml
I can also post the whole project here if anyone is interested or is willing to provide help. Thank you all
PS: I am French, so I am fine with answers in French instead of English; I just thought the topic could be useful for English-speaking members.

Message by Jean-Charles Mateo-Velez:
Hi,
your post is complex to treat as a whole, but here is a first tip. It seems you have set the local and global parameters appropriately, but your definition of the ion flux may be wrong. You should define all fluxes in A/m2. Too large a current may lead to huge potential instabilities and code divergence. You may also start with a fixed-potential CubeSat by setting electricCircuitIntegrate = 0 in the global parameters.
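For example (a quick sketch with made-up numbers, not real NanoFEEP figures): if one emitter delivers 30 µA through a source disc of 2.5 mm radius in the mesh, the flux to enter is simply the emitted current divided by the area of the source surface:

    import math

    # Made-up illustration values, not NanoFEEP data
    I_emitted = 30e-6             # emitted current per thruster [A]
    r_source = 2.5e-3             # radius of the source surface in the mesh [m]
    area = math.pi * r_source**2  # emitting area [m^2]
    flux = I_emitted / area       # source flux to enter in SPIS [A/m^2]
    print("flux = %.3g A/m^2" % flux)  # about 1.5 A/m^2

If the flux you entered is orders of magnitude above that kind of value, that alone can explain the divergence.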
JCharles

Message by Martin Tajmar:
Thank you for your answer Jean-Charles.
Instead of trying to set up everything at once, I tried running a simple simulation with minimal parameters, as you suggested. It went through and I got results, but I still have some questions that bother me.
First, in the debug file, it says “spacecraft surface : 61519.664m2”. When I read the Gmsh documentation, it clearly stated that units are not fixed and that values do not necessarily have to be given in meters. So for convenience, because the dimensions of my CubeSat are very small (10x10x10 cm), I put them all in millimeters. But SPIS seems to have interpreted them as meters, judging by the huge surface. So is there a way to tell SPIS to use millimeters for the Gmsh model, or do I have to edit all the values of the Gmsh .geo file and apply a 10^(-3) factor to them?
I am concerned that, with the surface being so huge, the results will not be the same as if millimeters had been considered instead of meters.
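If editing the file is the only option, I suppose I could script the conversion instead of changing every value by hand. A rough Python sketch of what I mean (assuming the .geo only uses plain numeric Point definitions, with no variables or expressions inside the braces; the file names are just placeholders):

    import re

    SCALE = 1e-3  # coordinates written in mm -> meters

    # Multiply every number inside a Point(...) = {x, y, z, lc}; definition by SCALE.
    # The characteristic mesh length lc is scaled too, which is what we want
    # if it was also specified in mm.
    point_re = re.compile(r"(Point\(\d+\)\s*=\s*\{)([^}]*)(\})")

    with open("Cubesat.geo") as fin, open("Cubesat_m.geo", "w") as fout:
        for line in fin:
            m = point_re.search(line)
            if m:
                scaled = [float(v) * SCALE for v in m.group(2).split(",")]
                line = (line[:m.start()] + m.group(1)
                        + ", ".join("%g" % v for v in scaled)
                        + m.group(3) + line[m.end():])
            fout.write(line)

But if SPIS has a built-in unit or scale setting for the Gmsh model, I would of course rather use that.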
Also, after completing the simulation, everything related to source1 (the 4 NanoFEEP thrusters, or at least what I am trying to set up as this source) stays at 0. Files specific to source1 have been created, and not for the other sources (1.1, 2, etc.), so I guess the source is correctly activated (at least I set sourceFlag1 to 1), yet everything related to it remains at 0.
I tried visualizing other data, such as ion/electron charge densities, and they all seem to be taken into account. So I am wondering what I missed when activating the thrusters in the software. Maybe I did not correctly link the thruster surfaces (in the local parameters) with source1 in the global parameters?
Here are the local parameters that I have changed: http://s9.postimg.org/3ttcaio0f/local.png
I have left everything by default for this first simulation, except for the source parameters of the thrusters and the background density of the computational volume (but I think this last one is not even used by the software).
I also uploaded the global parameters xml file in case someone can spot a mistake in there: http://www.mediafire.com/view/dey6h922ck9nn5y/GlobalParamsLEONanoFEEP.xml
Thanks for the help

Message by Martin Tajmar:
I still have not found the solution to my problems, but I was wondering: are there any videos or tutorials on thruster integration in SPIS? Quite a lot of ESA conference PowerPoint presentations are available, but they never detail the process in SPIS and only show the results. So is there a way to access the files used for these simulations, their parameters, videos of the conferences, etc.? I have not found any so far.

Message by Martin Tajmar:
Hello,
I have managed to simulate an ion thruster, and this time the results corresponding to this source are properly taken into account.
However, I have not managed to make the electrical circuit work. I want to study the floating potential, so I have to set electricCircuitIntegrate to 1. In the circuit editor, I have tried leaving it blank, and also linking all my nodes together with a 0 V bias (V 0 1 0.0 and V 0 2 0.0). In any case, each time I set electricCircuitIntegrate = 1, the simulation starts but gets stuck at 0% while writing thousands of lines like this:

20000 Wed Jul 15 17:44:56 CEST 2015 CGS solver fail to solve the system => try with Gauss solver (longer but exact)
20000 Wed Jul 15 17:44:56 CEST 2015 CGS solver fail to solve the system => try with Gauss solver (longer but exact)
20000 Wed Jul 15 17:44:56 CEST 2015 CGS solver fail to solve the system => try with Gauss solver (longer but exact)
20000 Wed Jul 15 17:44:56 CEST 2015 CGS solver fail to solve the system => try with Gauss solver (longer but exact)
20000 Wed Jul 15 17:44:56 CEST 2015 CGS solver fail to solve the system => try with Gauss solver (longer but exact)
20000 Wed Jul 15 17:44:56 CEST 2015 CGS solver fail to solve the system => try with Gauss solver (longer but exact)
20000 Wed Jul 15 17:44:56 CEST 2015 CGS solver fail to solve the system => try with Gauss solver (longer but exact)
20000 Wed Jul 15 17:44:56 CEST 2015 CGS solver fail to solve the system => try with Gauss solver (longer but exact)
20000 Wed Jul 15 17:44:56 CEST 2015 CGS solver fail to solve the system => try with Gauss solver (longer but exact)
20000 Wed Jul 15 17:44:56 CEST 2015 after SC integration TIME: 2.1365064E-10, dt = 2.1365064E-10
20000 Wed Jul 15 17:44:56 CEST 2015 Simulation time step control : dt is 2.2839507E-10 s
20000 Wed Jul 15 17:44:56 CEST 2015
20000 Wed Jul 15 17:44:56 CEST 2015 Time: 2.1365064E-10, Dt = 2.2839507E-10 s
20000 Wed Jul 15 17:44:56 CEST 2015
20000 Wed Jul 15 17:44:56 CEST 2015 after SC integration TIME: 4.420457E-10, dt = 2.2839507E-10
20000 Wed Jul 15 17:44:56 CEST 2015 Simulation time step control : dt is 3.8726997E-10 s
20000 Wed Jul 15 17:44:56 CEST 2015
20000 Wed Jul 15 17:44:56 CEST 2015 Time: 4.420457E-10, Dt = 3.8726997E-10 s
20000 Wed Jul 15 17:44:56 CEST 2015
20000 Wed Jul 15 17:44:57 CEST 2015 after SC integration TIME: 8.2931567E-10, dt = 3.8726997E-10
20000 Wed Jul 15 17:44:57 CEST 2015 Simulation time step control : dt is 4.7820004E-10 s
20000 Wed Jul 15 17:44:57 CEST 2015
20000 Wed Jul 15 17:44:57 CEST 2015 Time: 8.2931567E-10, Dt = 4.7820004E-10 s
20000 Wed Jul 15 17:44:57 CEST 2015
20000 Wed Jul 15 17:44:57 CEST 2015 after SC integration TIME: 1.3075158E-9, dt = 4.7820004E-10
20000 Wed Jul 15 17:44:57 CEST 2015 Simulation time step control : dt is 5.717487E-10 s
20000 Wed Jul 15 17:44:57 CEST 2015
20000 Wed Jul 15 17:44:57 CEST 2015 Time: 1.3075158E-9, Dt = 5.717487E-10 s
20000 Wed Jul 15 17:44:57 CEST 2015
20000 Wed Jul 15 17:44:57 CEST 2015 after SC integration TIME: 1.8792645E-9, dt = 5.717487E-10
20000 Wed Jul 15 17:44:57 CEST 2015 Simulation time step control : dt is 6.795951E-10 s
20000 Wed Jul 15 17:44:57 CEST 2015
20000 Wed Jul 15 17:44:57 CEST 2015 Time: 1.8792645E-9, Dt = 6.795951E-10 s
20000 Wed Jul 15 17:44:57 CEST 2015
20000 Wed Jul 15 17:44:57 CEST 2015 after SC integration TIME: 2.5588596E-9, dt = 6.795951E-10
20000 Wed Jul 15 17:44:58 CEST 2015 Simulation time step control : dt is 8.0412094E-10 s
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015 Time: 2.5588596E-9, Dt = 8.0412094E-10 s
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015 after SC integration TIME: 3.3629806E-9, dt = 8.0412094E-10
20000 Wed Jul 15 17:44:58 CEST 2015 Simulation time step control : dt is 9.466209E-10 s
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015 Time: 3.3629806E-9, Dt = 9.466209E-10 s

Or maybe it is simply that the computation with a floating potential takes much, much longer than with a fixed potential? I don’t know… Anyway, when I pause the simulation, I get something like this:

20000 Wed Jul 15 17:44:58 CEST 2015 |


… Simulation paused. Please WAIT during results saving 4.3096016E-9 s.
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015 In this time interval:
20000 Wed Jul 15 17:44:58 CEST 2015 COLLECTED current:
20000 Wed Jul 15 17:44:58 CEST 2015 -96.782166 A total,
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015 -82.98523 A on node0,
20000 Wed Jul 15 17:44:58 CEST 2015 -13.729657 A on node1,
20000 Wed Jul 15 17:44:58 CEST 2015 -0.06725071 A on node2,
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015 EMITTED current:
20000 Wed Jul 15 17:44:58 CEST 2015 -0.7948709 A total,
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015 -0.73665714 A on node0,
20000 Wed Jul 15 17:44:58 CEST 2015 -0.067436054 A on node1,
20000 Wed Jul 15 17:44:58 CEST 2015 0.009222197 A on node2,
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015 Time 4.3096016E-9 s => POTENTIALS:
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015 -0.41715133 V on SC ground (node0),
20000 Wed Jul 15 17:44:58 CEST 2015 -0.42442858 V on node1 ground,
20000 Wed Jul 15 17:44:58 CEST 2015 -0.46452633 V on node2 ground,
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015 -0.41715103 V on top of node0,
20000 Wed Jul 15 17:44:58 CEST 2015 -0.42442307 V on top of node1,
20000 Wed Jul 15 17:44:58 CEST 2015 -0.46452647 V on top of node2,
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015 -0.41715133 Vmin onTopOfNode0,
20000 Wed Jul 15 17:44:58 CEST 2015 -0.42442858 Vmin onTopOfNode1,
20000 Wed Jul 15 17:44:58 CEST 2015 -0.46452633 Vmin onTopOfNode2,
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015 -0.41715133 Vmax onTopOfNode0,
20000 Wed Jul 15 17:44:58 CEST 2015 -0.42442858 Vmax onTopOfNode1,
20000 Wed Jul 15 17:44:58 CEST 2015 -0.46452633 Vmax onTopOfNode2,
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015 Monitoring numerics of simulation at time 4.3096016E-9
20000 Wed Jul 15 17:44:58 CEST 2015 PICVolDistrib ions1: 104430 particles, last numerical time step used = 9.466209E-10s (max allowed = 1.0E-4s)
20000 Wed Jul 15 17:44:58 CEST 2015 WARNING: GlobalMaxwellBoltzmannVolDistrib elec1 may be inaccurate due to a locally positive potential, of maximum 0.79018366V (to be compared to temperature 0.233eV)
20000 Wed Jul 15 17:44:58 CEST 2015 => at positive potential locations, current and density are proportional to (1+e.pot/k.Te), which is an OML approximation for current and simply wrong for density
20000 Wed Jul 15 17:44:58 CEST 2015 histogram of potential: number per interval between min = -2.8813608 and max = 0.79018366 (unit V):
20000 Wed Jul 15 17:44:58 CEST 2015 1.0,
20000 Wed Jul 15 17:44:58 CEST 2015 3.0,
20000 Wed Jul 15 17:44:58 CEST 2015 4.0,
20000 Wed Jul 15 17:44:58 CEST 2015 1.0,
20000 Wed Jul 15 17:44:58 CEST 2015 0.0,
20000 Wed Jul 15 17:44:58 CEST 2015 0.0,
20000 Wed Jul 15 17:44:58 CEST 2015 1992.0,
20000 Wed Jul 15 17:44:58 CEST 2015 4681.0,
20000 Wed Jul 15 17:44:58 CEST 2015 278.0,
20000 Wed Jul 15 17:44:58 CEST 2015 2.0
20000 Wed Jul 15 17:44:58 CEST 2015 PICVolDistrib photoElec: 46604 particles, last numerical time step used = 9.466209E-10s (max allowed = 1.0E-5s)
20000 Wed Jul 15 17:44:58 CEST 2015 PICVolDistrib source1: 896 particles, last numerical time step used = 9.466209E-10s (max allowed = 1.0E-6s)
20000 Wed Jul 15 17:44:58 CEST 2015 PICVolDistrib secondElec True from ambiant electrons: 79 particles, last numerical time step used = 9.466209E-10s (max allowed = 1.0E-5s)
20000 Wed Jul 15 17:44:58 CEST 2015 PICVolDistrib secondElec BS from ambiant electrons: 0 particles, last numerical time step used = 9.466209E-10s (max allowed = 1.0E-5s)
20000 Wed Jul 15 17:44:58 CEST 2015 Last local volume potential change: 0.04750371V maximum change, and 0.0117688365V average change
20000 Wed Jul 15 17:44:58 CEST 2015 (last plasma time step = 9.466209E-10s)
20000 Wed Jul 15 17:44:58 CEST 2015 SC local surface potential change: 0.04785207V maximum change, and 0.04066977V average change
20000 Wed Jul 15 17:44:58 CEST 2015 SC capacitance = 1.0E-6F, last time step = 9.466209E-10s
20000 Wed Jul 15 17:44:58 CEST 2015
20000 Wed Jul 15 17:44:58 CEST 2015 | SPIS numerical simulation |
20000 Wed Jul 15 17:44:58 CEST 2015 | Task durations
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Simulation integration | Cumulative duration : 3 SECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Pause | Cumulative duration : 0 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Plasma | Cumulative duration : 2762 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Plasma/SC Interactions | Cumulative duration : 171 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task SC Circuit | Cumulative duration : 266 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Results storing | Cumulative duration : 420 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Transitions | Cumulative duration : 0 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Instruments | Cumulative duration : 251 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 |
20000 Wed Jul 15 17:44:58 CEST 2015 | Plasma subtasks
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Poisson Solver | Cumulative duration : 702 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Move all populations | Cumulative duration : 1731 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | At population level
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Injection of ions1 | Cumulative duration : 0 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Push of ions1 | Cumulative duration : 843 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Move of ions1 | Cumulative duration : 1060 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Injection of photoElec | Cumulative duration : 94 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Push of photoElec | Cumulative duration : 157 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Move of photoElec | Cumulative duration : 391 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Injection of source1 | Cumulative duration : 0 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Push of source1 | Cumulative duration : 0 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Move of source1 | Cumulative duration : 47 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Injection of secondElec True from ambiant electrons | Cumulative duration : 16 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Push of secondElec True from ambiant electrons | Cumulative duration : 0 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Move of secondElec True from ambiant electrons | Cumulative duration : 124 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Injection of secondElec BS from ambiant electrons | Cumulative duration : 15 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 | Task Push of secondElec BS from ambiant electrons | Cumulative duration : 0 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 |-- Task Move of secondElec BS from ambiant electrons | Cumulative duration : 109 MILLISECONDS
20000 Wed Jul 15 17:44:58 CEST 2015 |
20000 Wed Jul 15 17:44:58 CEST 2015 |

… Simulation paused and results stored at t = 4.3096016E-9 s.
20000 Wed Jul 15 17:44:58 CEST 2015 | Simulation paused at t = 4.3096016E-9 s.
20000 Wed Jul 15 17:44:58 CEST 2015 | Simulation still paused. Duration may be changed now.

So we can see that the potential has already changed (ground was set to -0.1 V at the beginning), but I still don’t understand why the time steps are so tiny. When I run the exact same simulation with electricCircuitIntegrate=0, the time steps are correctly set to the ones I put in the parameters, and the simulation doesn’t take long to compute (at least it doesn’t get stuck at 0%). With the integration of the electric circuit, it seems it will never end…

I also had issues when I tried to give the CubeSat its right dimensions. As I said before, the SPIS solver seems to interpret Gmsh values as SI units, i.e. meters, which initially led to a spacecraft surface of about 60 000 m². I edited the .geo file and scaled everything to the right dimensions: this time I get the right surface, and the mesh still has the same number of tetrahedra, faces, etc. BUT the simulation won’t run, and the console (Windows console, not the SPIS log console) prints a java.lang.OutOfMemoryError: Java heap space:
http://s21.postimg.org/sc4llugp3/Nano_FEEPunitmm_error.jpg
I don’t understand why, by just scaling the dimensions of the satellite without changing anything else, the simulation doesn’t run anymore and needs more memory.
So what am I doing wrong for both problems? I am really stuck here. Thanks for your help

Message by Christian Imhof:
Hi Martin,
here are just some suggestions/hints from my experience with the software.

  1. In order to accelerate the simulation when a floating body is simulated, you can turn to the parameter "ValidityRenormalization", found in the "Spacecraft" tab (the higher it is, the larger the time steps will get). With this parameter you can influence the adaptive time step algorithm, which is automatically activated when you simulate a floating satellite. Just make sure in this case that the maximum allowed time step, set in the simulation control tab, is not too large.
  2. From the Num-Log entries I can see that you have activated the PIC modelling of the secondaries emitted from the satellite. I do not think this is necessary for the kind of simulation you want to perform, so set the corresponding parameters, found in the surface interactions tab, to 1 or 0. I would suggest 0 in order to keep the simulation simple. If everything goes well, you can activate them with 1 for the final run of your analysis.
  3. The SPIS software allocates a fixed amount of memory, defined in the SPIS.bat or SPIS_GEO.bat file. The predefined value is 1 GB, which is quite small. You can simply edit the .bat files to increase the maximum allowed memory: look for the calls to java.exe with the maximum-heap option (-Xmx, e.g. -Xmx1024M) and change the number to a higher value, depending on the amount of RAM installed on your machine.
  4. Make sure that your ions are correctly emitted/modelled. I have had some cases where, due to local potential barriers arising in the plasma potential, most of the emitted particles were pushed back and re-collected by the surface. If you are using the Maxwellian model for the thruster, try to increase the Mach number, or try the thruster model that uses an external tabulated file for the definition. However, my experience with thruster modelling dates back to the old version 4.3.3 of the software. You should check the 3D results for the plasma potential as well as the particle density of the emitted species.
  5. For LEO, make sure that your ambient electrons are modelled with the Maxwell-Boltzmann model, since this can speed up the simulation. Go for the non-linear Poisson solver in this case, since it is more robust in this context.
  6. For floating satellites the circuit solver can have a rather big impact on the total simulation time. This can be sped up by using only conducting surfaces on the satellite. This should be the initial try, and if the thruster modelling goes well in this case, you can include the dielectric surfaces in a second step.
I hope these hints are helpful for making progress with your simulation task.
Greetings
Christian

Message by Martin Tajmar:
Hi Christian,
I’m going to reply to each of your suggestions to make it more readable!

  1. This first hint seems pretty useful; I will definitely try it out for faster simulations. I read that the satellite capacitance CSat could also serve as a convergence-speed parameter, since the higher it is, the faster the results will be obtained.
  2. I eventually figured that out; it has been some time since my last message, and I realised that, at least for the first simulations, I would not need such a "fancy" thing. Thank you anyway for the tip.
  3. That was actually one of the only files I did not open while searching for that parameter! I even tried opening every jar file, but didn't think about the launcher... Thanks a lot, that has solved the heap size error. But now I often get an error after an hour or so of simulation: "java.lang.OutOfMemoryError: GC overhead limit exceeded". Any idea how to solve that one?
  4. Since I do not have really big potential differences, this is not a problem. Moreover, when I used the MaxwellianThruster model I set a Mach number of 80, so I guess it was more than enough (at least the satellite did not collect any particles emitted by the source, which I verified with the instruments). Now I am using the Axisym model, and the floating potential of the CubeSat has not changed, so I think there is no problem here.
  5. Obviously I did, as suggested in the documentation.
  6. I also figured that out in the end: whereas simulations with fully conductive surfaces ran in minutes, the computation time with dielectric materials seemed much, much longer. I gave up on them and now use either fully conductive or totally non-conductive surfaces.
Anyway, thank you very much for your help, your answer was very useful.

Message by Christian Imhof:
Hi Martin,
nice to hear that some of the comments could help you with your SPIS project.
You have mentioned that you wanted to tune the CSat parameter to speed up the simulation. I do not think this will be very useful. Of course the steady state of the charging will be reached faster (within a shorter simulated time span) for a small CSat; however, this does not necessarily mean that your simulation will also produce the results faster. Since the automatic time stepping scheme uses the potential changes on the satellite, a small CSat will automatically lead to shorter time steps. I would really suggest using a fairly realistic value and then trying to tune the simulation in the way I described before. If you are only interested in the steady state, I would suggest staying with the initial CSat and using the renormalisation parameter for speed-up. Anyway, you should take into account that a run can really last several hours, up to one or two days.
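Just to illustrate the scaling, with completely made-up numbers (they are not taken from your run): the circuit integration limits each step so that the spacecraft potential change dV = I*dt/CSat stays below some allowed value, so the maximum time step is roughly proportional to CSat.

    # Made-up numbers, only to show the trend dt_max ~ CSat * dV_max / I_net
    I_net = 1e-5    # assumed net current collected by the spacecraft [A]
    dV_max = 0.05   # assumed allowed potential change per time step [V]

    for CSat in (1e-6, 1e-8):
        dt_max = CSat * dV_max / I_net
        print("CSat = %.0e F  ->  dt_max ~ %.0e s" % (CSat, dt_max))

With these numbers, dividing CSat by 100 also divides the allowed time step by 100, so the run does not reach steady state any faster in wall-clock time.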
Regarding the memory problems, you can always have a look at the numerical log, and in particular at the number of superparticles for the different species, which is one of the main drivers of the memory demand. The number of particles usually varies during the simulation, which can increase the memory demand. If it gets very large, you can try to decrease the parameter for the average number of superparticles per cell. Another way would be to check the meshing again and maybe go for a coarser mesh on the satellite, and especially on the outer boundary. Or you can further increase the allowed memory for SPIS. In my satellite models I usually have memory demands on the order of 3 to 5 GB; I guess your small CubeSat should be well below that.
Greetings
Christian

Message by Martin Tajmar:
Hello,
I fixed the memory errors by using smaller time steps; now I can run simulations of any duration without SPIS crashing, since it uses less heap space.
I have other problems though. I wanted to use the TransientArtificialSources transition class to see the floating potential the satellite would reach with different emitted currents from an artificial source (this way I wouldn’t have to run 100 different simulations with 100 different currents…). I used a MaxwellianThruster at first, and got an error saying “SourceFluxUpdater not possible to generate for source1 because this population in flux is not of type class spis.Surf.SurfDistrib.FluidSurfDistrib”. But… MaxwellianThruster IS a FluidSurfDistrib, right? In the Java API of the software, it is defined like this:

java.lang.Object
extended by spis.Surf.SurfDistrib.SurfDistrib
extended by spis.Surf.SurfDistrib.NonPICSurfDistrib
extended by spis.Surf.SurfDistrib.FluidSurfDistrib
extended by spis.Surf.SurfDistrib.LocalMaxwellSurfDistrib
extended by spis.Surf.SurfDistrib.MaxwellianThruster

I also tried with an AxisymTabulatedVelocitySurfDistrib source, but I get the same error message. Why? This kind of problem is not mentioned in “User Manual Annex 2: Advanced use for scientific applications” or anywhere else, and I really need this to work!

Message by Pierre Sarrailh:
Dear Martin,
This is clearly a bug.
Can you specify which SPIS version you are using? 5.1.X?
I will create a new issue in the bug tracker of SPIS 5.
Pierre

Message by Martin Tajmar:
Hello Pierre,
I am using SPIS 5.1.8 on Windows 7, 64-bit.

Message by Sébastien Hess:
Dear Martin,
This bug is related to the introduction of unmeshed wires in SPIS 5. It will be corrected in the next release of SPIS.
Regards,
Sébastien