How To Build Conceptual Models To Solve Big Complex Problems
Knowing the limits of what you don’t know is surprisingly useful.
This article explains how to take guesses methodically and then design a solution to any problem, no matter how big it seems or how little you know about it. Knowing what you don’t know is surprisingly useful; it might even be your greatest asset.
Designing a system to transport freight containers between interplanetary orbits takes more than a single skill set. It will take five engineering disciplines and three trades approximately 24 months to complete two iterations on top of the Mk.0 design before reaching a final mass-production design. One person can still build a valid conceptual model by approximating, then breaking down each successive problem and estimating an answer. Once a body of knowledge is built, a project feasibility study and business case can be made with measurable confidence. I could invest a lifetime learning the requisite skills to prove the interorbital transport system’s viability before public release, or focus on my primary skill set and plan to hire professionals to solve the third layer of problems.
Approximation is a first-pass method for building a proof of concept: by demonstrating that one object’s scale is an order of magnitude different from another’s, elements can be intuitively accepted or rejected without rigorous proof, i.e. the problems are too small to have significant impact, or you have far more power at your disposal than you need. The freight shipping concept was developed from the magnitude differences between particle acceleration and orbital mechanics. Large electromagnets can accelerate and control the trajectory of a powerfully charged particle; could they work on a larger, uncharged object on a journey that is shorter relative to its size, say a cargo container shipped along a corridor to Mars?
If accelerating a subatomic particle around a 27-kilometre track is accepted as a starting point for accelerating a cargo vehicle along an interplanetary arc, then some proof by approximation is needed before digging into complicated math. Concept development and the resulting model determine the requirements to support the idea, each of which can be classified as either possible or impossible.
Taking the principles of the idea, trimming the irrelevant parts and keeping those that could apply is simplistic but effective for building intuitive understanding. To approximate particle-acceleration technology applied to interplanetary shipping, the magnitudes of component dimensions, energy output and field quality are required first. Electromagnetic field quality is a key factor in maintaining an accurate orbital trajectory for particles in an accelerator, so the same will apply to cargo vehicles on interplanetary arcs if the dimension and energy requirements are met.
The Large Hadron Collider at CERN accelerates beams of protons with a tiny radius of 8.7×10⁻¹⁶ m around a loop of 27 km (Russenschuck, 2010, Ch. 1). Compared against cargo vehicles approximately 2 metres in height travelling a mean distance of 228 million km to Mars, there is an appreciable order-of-magnitude difference. In our instance, the cargo is 2×10⁰ m high on a journey of 2.28×10¹¹ m, a magnitude difference of about 10¹¹. CERN’s 8.7×10⁻¹⁶ m wide particle is on a 2.7×10⁴ m trip around the LHC, a difference of roughly 10²⁰. This leads to the conclusion that a number of factors, like energy output or field quality, can be acceptably lower while maintaining accurate interorbital targeting and trajectory.
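The order-of-magnitude comparison above can be sketched in a few lines of Python. The figures are the article’s own approximations, not engineering data:

```python
import math

def magnitude_gap(object_size_m: float, journey_length_m: float) -> float:
    """Orders of magnitude between journey length and object size."""
    return math.log10(journey_length_m / object_size_m)

# LHC: proton radius ~8.7e-16 m on a 27 km (2.7e4 m) loop
lhc_gap = magnitude_gap(8.7e-16, 2.7e4)

# Freight: a 2 m cargo vehicle on a 228 million km (2.28e11 m) trip to Mars
freight_gap = magnitude_gap(2.0, 2.28e11)

print(f"LHC gap:     ~10^{lhc_gap:.1f}")      # roughly 10^20
print(f"Freight gap: ~10^{freight_gap:.1f}")  # roughly 10^11
```

The nine-order-of-magnitude difference between the two gaps is the whole argument: relative to its size, the cargo’s journey is vastly less demanding than the proton’s.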
The core requirements of the LHC electromagnet assembly are energy input rate and superconductivity conditions, needed to generate fields of the required strength within the available power supply limits. Superconductivity is a condition of virtually zero electrical resistance that appears in some materials when they are cooled below a critical temperature. Cooling the electromagnet assembly below that temperature is energy intensive at ground level, but less so in high Earth orbit at the edge of space. This leads to another order-of-magnitude consideration: if the required magnet strengths were identical, it would be impossible to embed an electromagnet assembly of the CERN quadrupole scale in one satellite. The dependent conclusion is that the magnet must be scaled down to meet the dimension requirements. But what if we could distribute it between many satellites?
At a first pass, we need distributed electromagnet assemblies that follow the design principles of the LHC while maximising potential field strength and power storage capacity. The reduced field-quality requirement is used later to determine how accurate the vectors created by the individual electromagnet assemblies in each satellite must be. Vector additive propulsion means adding small impulses together so that they combine into a unified force and direction. Propulsion vectors generated by a whole swarm of satellites with high-field pulsed superconducting solenoid assemblies might be approximately strong enough to move freight containers. If the design is limited to components that have already been built, only a conceptual model and correct assembly are required.
This approach of problem, solution, second-layer problem, second iteration is how children learn to use new apps and adults build underground mines or develop new technologies. The Plan / Do / Check / Act (PDCA) cycle is well documented, has numerous variations of its core terminology, and its principles are inherent in our very nature. In past eras, when we tried to grow tomatoes and a crop failed, we refined our approach and tried again, solving each further problem encountered until success. This approach is as old as Homo sapiens and is suggested to be the key to our success: building ever better tools before conquering our environment and, soon, space.
In my previous career as a mining engineer, training was based on PDCA principles: developing potentially billion-dollar designs, with a tremendous number of unknown factors, to extract a mineral ore body that no one can see. To do this, industry standards were developed to communicate levels of confidence based on confirmed knowledge and the likelihood of unknown factors. The same thought process is used to build risk management plans and estimate the chance of something else going wrong once mitigations are in place, which is far more familiar to most people than project feasibility studies or mine design.
By establishing a point of confirmed knowledge, e.g. the grade of ore in a drilling sample, the next unknown in sequence can be answered, or a likelihood of its potential answer determined and revisited later as more knowledge accumulates. If ore is present in one section, the next is empty rock, and another ore section of similar dimensions sits beside more empty rock, a hypothesis starts to form: the ore may lie in vertical lenses, where a magmatic outbreak has hit a layer of strong rock and been pushed horizontally along the strata before finding more vertical cracking. With only one drill core this is impossible to determine conclusively, so a second is needed along a path parallel to the first. If it is a metre different in depth but shows the same regular sections of ore / rock / ore / rock, we can say it is now more likely that the ore has formed in lenses, by guessing that the sections line up. While this is a good guess, it is far from conclusive and further still from investors agreeing.
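The "each matching core makes the lens hypothesis more likely" reasoning can be framed as a simple Bayesian update. The prior and likelihood values below are purely illustrative, not geological statistics:

```python
def update(prior, p_match_if_lens, p_match_if_random):
    """Posterior probability of the lens hypothesis after one matching core."""
    numerator = p_match_if_lens * prior
    return numerator / (numerator + p_match_if_random * (1 - prior))

confidence = 0.2          # weak prior after the first core alone
for core in range(2, 4):  # two further cores show the same ore/rock pattern
    confidence = update(confidence, p_match_if_lens=0.8, p_match_if_random=0.2)
    print(f"after core {core}: {confidence:.2f}")
```

With these made-up numbers, two matching cores lift confidence from 0.2 to 0.8: more likely, as the article says, but still far from the certainty investors want.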
For a better estimation of ore body consistency, a project feasibility report is prepared by seasoned experts, and a recommendation based on current economic, legal and engineering factors is delivered. If those first few drill cores and a rough estimation of mining method, cost and timeframe give a reasonable indication, further capital may be invested to refine a +/-50% project feasibility study into a +/-25% confidence initial design. As designs are proposed and issues resolved, the confirmed knowledge expands. Hundreds of drill cores may be needed to build a detailed data set of the ore body, and the level of investment is always a gamble matched to the likelihood of success. To communicate established confidence, and the need for further capital, to investors, industry-standard confidence levels are based on measured ore grade and tonnage.
The core ore body of our mine design will be all but confirmed through high-density drill sampling and the formation of the first section of the central mine shaft or inroad; however, nothing is rock solid until it is dug up. From this confirmed core of knowledge with a +/-1% confidence design, the wider extents of the ore body are defined by estimated volume, with the confidence of unknowns graded at +/-5/10/25/50% to indicate which direction development should progress in when the confirmed ore body starts to dry up.
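Those graded confidence bands translate directly into tonnage ranges. A minimal sketch, with made-up zone names and tonnages standing in for a real resource model:

```python
# (zone, estimated tonnes, confidence band) — placeholder figures only
zones = [
    ("central shaft core", 500_000, 0.01),
    ("inner extension",    300_000, 0.05),
    ("mid extension",      250_000, 0.10),
    ("outer extension",    400_000, 0.25),
    ("fringe",             600_000, 0.50),
]

ranges = {}
for name, tonnes, band in zones:
    low, high = tonnes * (1 - band), tonnes * (1 + band)
    ranges[name] = (low, high)
    print(f"{name}: {low:,.0f} to {high:,.0f} t (+/-{band:.0%})")
```

The point of the grading is visible immediately: the +/-1% core is bankable, while the +/-50% fringe could hold anywhere from half to one-and-a-half times the estimate, which is exactly the uncertainty further drilling is paid to shrink.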
This approach of approximation, estimation and iteration is what led to the public release of the H. Industries Mk.0 Interorbital Transport System. The second-iteration math in the Mk.0 Design Calculator gave a viable freight mass and shipping time, while the business case determined profit from the fifth year of swarm operation. These two results indicate that the most valid strategy for solving this complex problem is to launch a public business with a high-level design and hire professionals to validate everything. A large number of third-layer problems remain to be answered through research and calculation; however, if the first two layers of the conceptual model work, it is just a matter of time until those issues are solved too. #MarsShot
Thanks for reading,
Angel donations help to continue startup activities and research!
For any enquiries, please reach out to: firstname.lastname@example.org