Saturday, 7 April 2018

Systems Engineering Trade Study Template


Chapter 4: Systems Engineering Tools
By David Beale and Joseph Bonometti
This chapter provides supplementary examples of applying the systems engineering tools that a student team may need during the SE design process. It is certainly not comprehensive in the breadth and sophistication of the tools available in industry. What is presented here is a set of essential tools, some basic and simplified for easy application to a student project.
This is a good moment to reinforce the idea of the art of Systems Engineering as distinct from the artist's "tools." Practicing, understanding, maintaining, and adapting any tool set is necessary in every profession, but if the tools become the object of the art, or the end goal in themselves, they have failed their master. In today's technical and highly complicated aerospace field, the danger is over-use of and over-reliance on SE tools to produce a product that is successful in the customer's eyes. Only sound engineering and good judgment will triumph in that respect, while over-dependence on "process" serves only to make a slave of the systems engineer and to obscure critical impediments to success behind an overly strict attention to filling out documents, meeting deadlines, adding up budgets, and presenting insipid data. Therefore, apply the tools with the intent that they serve to:
- Aid communication
- Prevent "dumb" mistakes
- Highlight the most important issues or problems
- Ensure completeness
- Produce the best possible product for the given situation
Do not treat the tools as:
- The most important thing in the project
- A substitute for good engineering or sound judgment
- A process that guarantees all errors (particularly design errors) are eliminated
- What you are paid (or graded) to produce
Document in outline form
Product Breakdown Structure (PBS)
Concept of Operations
Validate and Verify
Interface Control Document
Mass, power, cost and link budgets
Failure Mode Analysis
Trade studies and baseline documents
Work Breakdown Structure (WBS), Gantt Chart, SEMP
Table 1. Systems Engineering Tools. *Other tools apply to Architectural Design but are not presented in this chapter, such as functional analysis or functional decomposition (see the example in Chapter 2), decision matrices, the House of Quality, as well as software simulation (see the example in Chapter 8), prototyping, and modeling.
Product Breakdown Structure (PBS)
A product breakdown structure (PBS) is a hierarchical breakdown of the hardware and software products of the project. It is created as part of the SE architectural design function. The following example (Figure 1) comes from (NASA, 2007). These can be created in PowerPoint using Insert > Diagram > Organization Chart. Some discretion can be used to combine blocks (most often for smaller projects), or to add specificity for others. The PBS communicates the areas to be worked and supports tasking, budgets, and other SE development tools. For the SOFIA project (Figure 2), the two major subsystems are the Observatory System and the Ground System.
Figure 1. NASA product breakdown structure (PBS) for a launch vehicle.
Figure 2. Example PBS for the SOFIA infrared telescope.
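As a minimal sketch of the idea, a PBS is just a hierarchy, so it can be captured as a nested structure and rendered as an indented tree. The top-level names below match the SOFIA example above; the lower-level items are hypothetical placeholders, not SOFIA's actual PBS:

```python
# Sketch: represent a product breakdown structure as a nested dict
# and render it as an indented tree. Lower-level names are illustrative only.
def render_pbs(node, name="Project", depth=0):
    """Return a list of indented lines for the PBS rooted at `name`."""
    lines = ["  " * depth + name]
    for child, grandchildren in node.items():
        lines.extend(render_pbs(grandchildren, child, depth + 1))
    return lines

pbs = {
    "Observatory System": {"Telescope Assembly": {}, "Aircraft": {}},
    "Ground System": {"Mission Ops Center": {}},
}

print("\n".join(render_pbs(pbs)))
```

The same nested-dict pattern works for a WBS; only the contents differ (work functions such as management and systems engineering appear alongside hardware items).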
Work Breakdown Structure (WBS)
Figure 3 shows a work breakdown structure, or WBS (a hierarchical breakdown of hardware, software, services, and data), for the SOFIA example system of Figure 2. The WBS is a tree of the subdivided efforts needed to reach the end objective, and it should include all work functions. Work functions found in a WBS but not in a PBS include project management, systems engineering, etc. (from (NASA, 1995)). The WBS also aids in preparing cost estimates, manpower requirements, and the like. This document, like all SE tools, is tailored to the project. It should be useful and applicable to the work, not just a copy of "the last project's package" to include in the next review or briefing. A good rule to use with all SE tools: if it serves no function in support of the project, it should not be used. In the case of small student projects, the learning objective can make it important to include process and tracking tools that would otherwise be unnecessary. Also, student projects passed from one class or student group to the next make clear documentation through Configuration Management a greater necessity.
Figure 3. Top-level WBS of the SOFIA project, and details of the Observatory System WBS.
The WBS includes many of the same elements as the PBS, but adds things like management, safety, reliability, and other significant oversight activities. It can be thought of as the "people-derived" components, while the PBS is more the "things" that make up the project. Again, the goal is to communicate the complexity and nature of the project, stimulate the engineering and design process, and prevent obvious omissions or duplications.
This next WBS is an example of how a traditional WBS can be configured for an unusual or advanced systems engineering task. The Momentum Exchange Electrodynamic Reboost (MXER) tether system is a 100 km space tether that can catch a satellite and throw it onto a trajectory toward GEO, the Moon, or interplanetary space. It is reboosted by pushing against the Earth's magnetic field, after which it can be ready to impart momentum to the next satellite launched to LEO. Note how its unique components and needs are captured in the WBS. The "Propagator Code" is a separate computer algorithm that predicts where the tether tip will be at rendezvous. Because this was a key factor for the design, control, and operations, it was a top-level area to be worked. It could have been included as a lower-level item under a traditional avionics or spacecraft-control area, but in this technology development project it was better served broken out at the top. How does one know when to do this? That is the art of Systems Engineering! Experience, engineering intuition, and consultation with design team members are the typical ways of making this judgment. It should never be based on the last project's documentation just because it is easy to "cut and paste" and move on to the next SE tool.
Figure 4. MXER (Momentum Exchange Electrodynamic Reboost) tether work breakdown structure.
A WBS does not have to be a hierarchical chart. See the PEP example in Chapter 2, which is based on a Gantt Chart and a WBS outline structure using MS-Project.
Trade Studies
A trade study is a tool used to help choose a solution to a problem or to narrow the development space for a particular project. Trade studies are frequently used early in the product development cycle and, when used properly, can be among the most important engineering tasks in a product's life cycle. In highly sophisticated projects, or in the design of "systems of systems," the trade study is often very complex, with engineering detail approaching that of the actual design process in some areas. These studies are also extremely important in defining and illuminating which factors truly are the most influential for the success of the effort. The most surprising results should originate in the early trade studies (Phase A or B); if not, the surprises will come late in a project, when the time and money needed to make a change are excessive.
If a problem has multiple solutions, a trade study ranks the solutions by giving each one a numerical value. A simple method is to determine a numerical value for each option. This is usually done using weighting factors and a normalization scale for the evaluation criteria. The evaluation criteria are the important factors we want to include in the trade study. The weighting factors are used to set the importance of the evaluation criteria relative to one another. The normalization scale creates a constant-interval scale that lets us assign a numerical value to each of the evaluation criteria.
Cost, mass, volume, power consumption, heritage, and ease of use are some basic evaluation criteria (note: depending on the project, additional criteria may be desirable, or eliminating one listed here may be appropriate). It is also very important to understand that the choice of weighting factors and normalization scale is extremely important to this process. Great care must be taken in establishing these values, because the result can be highly sensitive to intentional or unintentional bias. For example, if one lists the cost of insurance when buying a car, along with the initial purchase price, as evaluation criteria, but assigns insurance a much larger weighting factor than the purchase price, the ranking of the different car options will be markedly different. When insurance is much smaller than the purchase price and varies only slightly among the various cars, the results will not yield the best value if what you really want is low overall life-cycle cost or the lowest cost per mile.
In some very tight trade spaces, where one wants to distinguish among several options, a tiered weighting scale can be employed (i.e., a weighting scale of 1, 3, and 9, for example). This intentionally inflates the small distinctions between options so that a clear "winner" can be determined. Such schemes should be used with extreme care and only when the options truly cannot otherwise be differentiated.
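The weighted-sum method described above reduces to a few lines of arithmetic. This sketch uses arbitrary placeholder criteria, weights, and normalized scores, purely to show the mechanics:

```python
# Weighted-sum trade study scoring: each option gets a normalized score
# (e.g., 1-5) per criterion; total = sum(weight * score). Data is illustrative.
def trade_score(weights, scores):
    """weights: criterion -> weighting factor; scores: criterion -> normalized score."""
    return sum(weights[c] * scores[c] for c in weights)

weights = {"cost": 3, "mass": 2, "legacy": 1}
options = {
    "Option A": {"cost": 4, "mass": 3, "legacy": 5},
    "Option B": {"cost": 5, "mass": 2, "legacy": 2},
}
ranked = sorted(options, key=lambda o: trade_score(weights, options[o]), reverse=True)
```

Note how sensitive `ranked` is to the weights: swapping the weights of `cost` and `legacy` can flip the result, which is exactly the bias risk discussed above.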
Simplified Trade Study Steps
1. Define the problem.
2. Define constraints on the solutions.
3. Find 3-5 solutions.
4. Define evaluation criteria.
5. Define weighting factors.
6. Define the normalization scale.
7. Fill in the trade study (e.g., spreadsheet format).
Trade Study Example - Buying a Car (from J-M Wersinger and Thor Wilson)
Based on the steps listed above:
1. I want to find the best option for a new car.
2. The car must be under $50,000 and must be local.
3. A black Civic with 37,000 miles, costing $4,000, in poor condition. A red BMW with 57,000 miles, costing $17,000, in excellent condition. A white Passat with 6,000 miles, costing $7,000, in good condition.
4. Cost, mileage, car condition, and car color.
5. Assign a weighting factor of 3 to cost and car condition, 2 to mileage, and 1 to car color.
6. Normalization scales:
Car condition
Car color
Table 2. Trade study example.
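The car example can be worked end to end in a few lines. The weights are the ones given in step 5; the 1-3 normalized scores below are assumed for illustration, since the normalization scales of Table 2 are not reproduced here:

```python
# Car trade study from the steps above. Weights come from step 5; the
# 1-3 normalized scores are assumed placeholders (Table 2 scales not shown).
weights = {"cost": 3, "condition": 3, "mileage": 2, "color": 1}
cars = {
    "Civic":  {"cost": 3, "condition": 1, "mileage": 2, "color": 2},  # $4,000, poor, 37k mi
    "BMW":    {"cost": 1, "condition": 3, "mileage": 1, "color": 2},  # $17,000, excellent, 57k mi
    "Passat": {"cost": 2, "condition": 2, "mileage": 3, "color": 2},  # $7,000, good, 6k mi
}
totals = {car: sum(weights[c] * s[c] for c in weights) for car, s in cars.items()}
best = max(totals, key=totals.get)
```

With these assumed scores the Passat wins on low mileage and mid-range cost; different (equally defensible) normalization choices could change that, which is the point the surrounding text makes about bias.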
Trade Study Example - Microcontroller Comparison for a CubeSat (AUSSP, 2007)
Table 3. Trade study example, microcontroller versus FPGA.
This is a trade study performed by the C&DH subsystem team to evaluate the relative strengths and weaknesses of standard 8-bit microcontrollers and antifuse (one-time programmable) FPGAs (Field Programmable Gate Arrays) in CubeSat applications. While microcontrollers have been widely used in CubeSat applications, FPGAs are a relatively new technology that may provide a solution to the radiation-induced single-event upsets (SEUs) common in low-Earth, sun-synchronous orbit. The evaluation criteria were chosen so that the trade would compare not a specific piece of hardware from each category, but rather the essential traits of any piece of hardware in each category.
Result: The inherent radiation tolerance of any piece of hardware flown on a CubeSat is a critically important characteristic. While antifuse FPGAs appeared to be a solution to SEUs on small satellites, they did not rank higher than the more traditional microcontrollers in the trade study. When the performance, cost, time, and scope of implementing each system are compared, the FPGAs appear to win only the performance category, with the microcontrollers sweeping the others. Even with radiation tolerance at 30% of the total weighted score (an example of biasing the analysis to check the robustness of the result), the FPGAs surprisingly still trail the microcontrollers. No doubt this type of trade study would have very different results for missions with more resources, more time, and more experienced engineers. Nevertheless, it appears that the correct route is to implement a standard microcontroller together with logical SEU detection and correction algorithms.
This was a subsystem-level comparison that did not consider the systems engineering implications for the other subsystems, which would have further solidified the choice of the standard programmable controller. With only one instruction set to implement, the quality and reliability of all controlled subsystems would be increased. With FPGAs, those other subsystems would have had to lock in technology decisions earlier and would have had less ability to correct problems through software changes. From a systems engineering (or project management) standpoint, a greater number of SEUs, additional shielding mass, or more effort in component layout (taking advantage of the shielding effect of tanks, structures, etc.) are all acceptable trade-offs compared to an inflexible but radiation-hardened C&DH subsystem.
Interface Control Document Examples
Interface Control Documents (ICDs) are inherently simple in that they record all the places where it is easy (and past experience has proven it so) to fail to communicate technical specifications and requirements between subsystem teams. Misaligned bolt holes or missing connector pins are expensive to fix when final assembly is underway and the project is a few weeks behind in meeting a fixed launch date. The document is another place to check voltages, fluid flow rates, thermal loads, etc., to ensure compliance with higher-level standards and requirements. The ICD can also help account for the never-planned-for "missing mass" in the form of wiring, brackets, and miscellaneous hardware needed in a real working assembly, since system components are rarely all connected end to end.
The ICD is not only documentation, but can also be hardware designed to check fit and to test the actual components throughout development. This is especially useful on large projects with components and subsystems under development around the world. Instead of physically mating parts, an interface hardware model is built and each part is tested against it separately, without affecting any one group's schedule or development process. ICDs also apply to ground equipment and to wireless communications. Anywhere there is a need to document separate functions that ultimately interconnect, an ICD can be developed. A wide variety of document forms are shown in the following examples. Tables or spreadsheet formats are common, but ICDs can include technical documents, pictures, hardware, drawings, and specification references.
The systems engineer is responsible for keeping the ICD current and accurate. The subsystem leads often blindly apply whatever they last received as the specifications they are designing and building to. Therefore, the systems engineer must ensure that every team gets every change update and that conflicts are resolved early in the design phase. There is also a need to ensure that layout room, mass, thermal contact, physical stiffness, and electrical/data parameters are satisfactory for all subsystems as well as for the mission objectives.
Analog signal name                                   Measured / routed via
Regulated 3.3 V supply                               CDH MCU ADC (PF0)
Regulated 5.0 V supply                               CDH MCU ADC (PF1)
TNC power, XCVR power                                CDH MCU ADC (PF2)
(nothing uses directly)                              via I2C from ADC1 on EPS
                                                     via I2C from ADC1 on EPS
Solar cell output                                    via I2C from ADC1 on EPS
Whole array                                          via I2C from ADC1 on EPS
(none of this is used directly)
EPS currents (sent as voltages):
Bat1 charge current                                  via I2C from ADC3 on EPS
Bat1 discharge current                               via I2C from ADC3 on EPS
Bat2 charge current                                  via I2C from ADC3 on EPS
Bat2 discharge current                               via I2C from ADC3 on EPS
Solar cell array output current                      via I2C from ADC1 on EPS
5.0 V bus current draw                               via I2C from ADC1 on EPS
3.3 V bus current draw                               via I2C from ADC2 on EPS
3.7 V bus current draw                               via I2C from ADC2 on EPS
Solar cell 1 current (used to determine attitude)    via I2C from ADC2 on EPS
Solar cell 2 current                                 via I2C from ADC2 on EPS
Solar cell 3 current                                 via I2C from ADC2 on EPS
Solar cell 4 current                                 via I2C from ADC2 on EPS
Solar cell 5 current                                 via I2C from ADC2 on EPS
Solar cell 6 current                                 via I2C from ADC2 on EPS
Transceiver supply voltage (checks XCVR relay)       from ADC on COM board
(All need the same reference voltage.)
Temperatures (may have to eliminate some):
Thermistors (x8)                                     to ADC0 on CDH
Thermistor                                           to MCU (PF7)
2nd receiver temperature                             thermistor to MCU (PF4)
Thermistor                                           to MCU (PF5)
Thermistor                                           to ADC on (PF6)
Table 4. Interface control document example.
Table 5. MCU/TNC interface.
Table 6. Interface control document example.
Mass Budget Examples
For aerospace applications, the mass budget spreadsheet is the iconic tool in the systems engineer's toolbox. Because minimizing mass is so critical in aircraft, rocket, and spacecraft design, the bottom-line number here is often the most watched and agonized-over result in SE. Though of great importance, the mass budget is simple to set up and use. Unfortunately, it is often poorly estimated and is certainly the most abused SE tool. Mass is critical, so even small errors can lead to big problems in aerospace. A 30% margin is generally added to preliminary mass estimates, with more than 50% added for systems or subsystems that are new or involve unique technology. Even a spacecraft that is a duplicate of a previous mission will carry a 10% margin, since every mission has unique environments and almost every requirement modification translates into a mass penalty. Some things are better not burdened with a flat mass margin, such as propellants. Such fluids tend to be heavy and are typically well known from trajectory analysis and other factors. In the case of a heavy radiation shield, which is significantly massive but essentially "dumb mass," a margin of 2% or less may be applied.
The systems engineer should not be the clerk or secretary who enters whatever value the subsystem leads produce and then announces the grand total to the team. There is judgment involved in producing an initial gross mass, assigning a maximum mass allocation to each subsystem, and providing guidance on contingency at the part, component, or system level. The total must meet the other mission-level criteria, such as the launch vehicle limit, the planned payload, or a mass-based cost target. Determining whether more advanced technology is needed, or reallocating the mass estimate, takes technical knowledge. The systems engineer must serve as the "honest broker" who steers the best overall decisions for the team and handles the inevitable conflict between subsystems.
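The margin policy just described (roughly 30% on preliminary estimates, 50% or more on new technology, 2% or less on "dumb mass" like shielding) is easy to encode next to the line items themselves. The subsystem names and current-best-estimate masses below are hypothetical:

```python
# Mass budget with a per-item margin policy, following the text:
# ~30% for preliminary estimates, 50%+ for new technology, ~2% for
# "dumb mass". Names and CBE (current best estimate) masses are illustrative.
items = [
    # (name, CBE mass in kg, margin fraction)
    ("Structure",        12.0, 0.30),
    ("New-tech payload",  5.0, 0.50),
    ("Radiation shield", 20.0, 0.02),
]
allocated = {name: cbe * (1 + margin) for name, cbe, margin in items}
total = sum(allocated.values())
```

Keeping the margin fraction as data per line item, rather than one flat factor, mirrors the judgment the text asks of the systems engineer: the total with margins (not the sum of CBEs) is what gets checked against the launch vehicle limit.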
Table 7. C&DH mass budget example.
Power Budget Examples
Power budgets are often kept at the subsystem level for convenience, but this is really a systems engineering function and deserves as much concern as the mass budget. There are trades made between subsystems, technology selections, and mission operations integration. There are two main power budgets: total power (everything on) and operational limits (maximum allowable power). Often the power budget will restrict certain functions and limit operating modes. Data transmission is usually a power-intensive task and is limited to certain periods of the mission when other subsystems are off or in an idle mode. Satellites in Earth orbit will have a period of darkness when the solar panels no longer provide power and the batteries must be reserved for heating crucial spacecraft components. Such trade and operations decisions sit at the SE level, even when the power budget is maintained by a single subsystem lead.
In the C&DH budget example below, several power modes or states are listed. Each has its own maximum or minimum allowable limit, and each is accounted for separately. These states correspond to the mission modes and timeline.
Table 8. C&DH power budget example.
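The per-mode bookkeeping described above can be sketched as follows. The loads, wattages, mode definitions, and limits are hypothetical placeholders, not values from Table 8:

```python
# Power budget by operating mode: each mode sums only the loads active in
# it and is checked against that mode's ceiling. All values are illustrative.
loads = {"MCU": 0.3, "transceiver_rx": 0.5, "transceiver_tx": 2.0, "heater": 1.5}
modes = {
    # mode name: (active loads, allowable limit in watts)
    "idle":     (["MCU", "transceiver_rx"], 1.0),
    "downlink": (["MCU", "transceiver_tx"], 3.0),
    "eclipse":  (["MCU", "transceiver_rx", "heater"], 2.5),
}
report = {m: (sum(loads[l] for l in active), limit)
          for m, (active, limit) in modes.items()}
violations = [m for m, (draw, limit) in report.items() if draw > limit]
```

An empty `violations` list means every mode fits under its ceiling; a non-empty one flags exactly the operating-mode restrictions the text describes (e.g., forbidding transmission during eclipse).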
Data and Link Budget Examples
Like the power budget, the data and link budgets are usually maintained by a subsystem lead, with overall coordination responsibility still left at the systems engineer level. The data budget is derived from the time available for communications and the baud rate. There are usually two rates, and often two separate systems: one for the uplink, or communications to the system, and another for the downlink of data being sent to the ground station. There are many factors to consider, and engineering judgment is required in making these estimates. Data quality, resolution, error checking, packet size, and similar criteria are typical parameters tracked in the data and link budgets.
Table 9. C&DH data budget example.
Table 10. Link budget example.
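The core arithmetic of a data budget, volume per pass derived from baud rate and contact time with an allowance for overhead, might be sketched as below. The rate, pass duration, overhead fraction, and the 10-bits-per-byte framing assumption are all illustrative, not values from Table 9:

```python
# Data budget arithmetic: usable downlink volume per ground-station pass.
# Assumes 10 bits per byte on the wire (8 data bits + framing) and a fixed
# protocol-overhead fraction. All numbers are illustrative.
def bytes_per_pass(baud, pass_seconds, overhead_fraction=0.2):
    """Usable payload bytes per pass at the given baud rate."""
    raw_bytes = baud * pass_seconds / 10
    return raw_bytes * (1 - overhead_fraction)

# e.g., 9600 baud over a 600 s pass with 20% packet overhead
usable = bytes_per_pass(9600, 600)
```

Comparing `usable` against the data generated per orbit tells you whether data must be compressed, decimated, or buffered across passes, which is where the error-checking, resolution, and packet-size judgments in the text come in.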
Failure Mode Analysis
Failure mode analysis is a tool for risk management. It is the most rigorous engineering or technical aspect of risk management. A failure mode is the manner in which something fails. Each failure has one or more consequences, which are called failure effects. A failure cause is what induces the failure. After all possible failures are identified, the failure effects are estimated. A mitigation plan is then prepared for each potential failure cause.
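The worksheet implied by that paragraph (mode, effects, cause, severity, mitigation) reduces to simple records that can be filtered and sorted. The entries and the severity coding below are hypothetical, not taken from Table 11:

```python
# FMEA bookkeeping sketch: each record ties a failure mode to its effect,
# cause, severity code, and mitigation. Entries and codes are illustrative
# (here severity 1 is taken to mean mission failure).
fmea = [
    {"mode": "Transmitter stuck off", "effect": "no downlink",
     "cause": "relay weld", "severity": 1, "mitigation": "redundant transmitter"},
    {"mode": "Thermistor open",       "effect": "lost temperature point",
     "cause": "broken lead", "severity": 3, "mitigation": "accept risk"},
]
# Mitigate mission-ending failures first; lower-severity risks may be accepted.
mission_critical = [r["mode"] for r in fmea if r["severity"] == 1]
```

Filtering on severity is exactly the triage the next section describes: mitigation effort is concentrated on the failures that would end the mission, while lesser risks are consciously accepted.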
Failure Mode Analysis on the CubeSat
Because the CubeSat project does not have the funds or the space available to mitigate every potential failure, the system must be redundant, fault tolerant, and capable of correcting detected errors. Failure mode analysis is done to determine the potential failures in a design and how to mitigate them. This analysis determines the relationship between the failure of a single component and its effects on the system as a whole. We accept that no system is perfect and that some risks are therefore acceptable. Mitigation attempts will thus be focused on those failures that could cause mission failure. In most cases, a secondary component can be used to mitigate any mission failure. The table immediately following shows the four different ways in which a component failure can affect the entire system.
Table 11. Failure Mode Analysis, Severity Codes of a Potential Failure.
Identified Single-Point Failures
A single-point failure occurs if the mission fails as a result of a single component on the satellite failing. Simply put, it is hardware the satellite cannot operate without. It is extremely important to detect and eliminate all single-point failures that may arise on AS-I. In an effort to eliminate single-point failures, a redundancy philosophy was adopted. The redundancy philosophy states that all satellite hardware identified as single-point-failure components must have redundant (secondary) hardware. Generally, having redundant hardware simply means having two of the same component, operating independently. A common variation on this "if it fails, use the other one" theme is the hot spare, where the second component is active, running, and continuously ready to take over at any moment. Sometimes the units are regularly swapped as the primary unit for other reasons, such as to preserve battery life or to prevent joints from seizing in mechanisms.
Other redundancy philosophies include having a completely different piece of hardware as the backup. This might be a low-gain antenna that meets housekeeping needs and a high-gain antenna for the mission science. If one fails, the other can still perform the same duty, but with a reduced total data return. As with most aerospace-related redundancy schemes, this provides significant mass savings over carrying two of both types of communication system on board. Another mass-saving method is to identify the real weak point in the subsystem and apply redundancy at the component or part level. A good example is rocket valves, where the valve body is the heaviest part and the least likely to fail. In the past, welding in a complete second valve, in line or in parallel, was the solution. However, since the failure is almost always at the valve seat (e.g., not closing tightly), redundancy can be achieved by duplicating the seat seals and using dual electromagnet actuation coils. This saves not only the mass of one valve body, but also the cost and risk of two more welds in the system. Redundancy can also lie in derating components to ensure they last the mission lifetime, or in adding margin to a subsystem so that if the redundant half of a unit fails, the other can be "throttled up" to higher performance to reduce or eliminate the loss. A spacecraft power conversion box or transformer might run as two low-power units that together supply the primary instrument's high voltage, thereby gaining long life at lower risk, but with each able to meet the full voltage requirement (or the minimum for mission success) should the other component fail.
Details on the AS-I redundancy options for the single-point failures are provided in each subsystem's documentation. The following components were identified as single-point-failure components on the CubeSat:
Energy storage (lithium-ion batteries)
Failure Mode Analysis Example - COMM System, C&DH System, and Structures
Table 12. Failure mode analysis for the Comm system.
Table 13. Failure mode analysis for the C&DH system.
Table 14. Failure mode analysis for the Structures system.
Gantt Chart
A Gantt Chart is a bar chart that can be used to allocate time to tasks, schedule reviews, and set milestone dates. Tasks are the activities of the project. Tasks have start and finish dates (e.g., "Build Structure"). Generally we write a task as a phrase beginning with a verb (e.g., "creating the product"). Each task has a start time and a finish time. The chart usually needs updating, since finish dates are usually estimates and tend to slip (almost always later rather than earlier in actual practice). Milestones are the checkpoints, due dates, dates for interim goals, or dates of reviews. Since the chart period is usually set in days, weeks, or months, meetings, reviews, and milestones appear as single points or markers rather than as a line spanning the time the item takes. See Chapter 2 for other examples.
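As a toy illustration of the bar-per-task and point-per-milestone idea, a minimal text Gantt renderer might look like this (task names and week numbers are made up):

```python
# Render a tiny text Gantt chart: one bar per task, one column per week.
# A one-week item (e.g., a review) renders as a single '*' marker.
# Task data is illustrative.
def gantt_row(name, start, end, total_weeks, width=12):
    """Bar from week `start` to `end` inclusive (1-indexed)."""
    cells = ["*" if start == end and w == start
             else ("=" if start <= w <= end else ".")
             for w in range(1, total_weeks + 1)]
    return f"{name:<{width}}" + "".join(cells)

tasks = [("Design", 1, 3), ("Build", 3, 6), ("Review", 4, 4)]
chart = [gantt_row(n, s, e, 8) for n, s, e in tasks]
print("\n".join(chart))
```

Real projects would use MS-Project or similar, as the Chapter 2 example does; the point here is only the data model: tasks with start/finish dates, milestones as single points.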
The Gantt Chart is typically one of those SE "tools" that can make a slave of its master in actual practice. If you knew exactly which tasks were needed, precisely how long each task would actually take, and the proper order in which they should be done, then the Gantt Chart could correctly be used as a true indicator of engineering progress. If actual progress ran ahead of the Gantt Chart, the team could be considered exceptional and take time off. If they fell behind, they would need to stay late and get back on track. But all of this assumes the chart reflects true progress. Only in rare circumstances is there enough information to lay out even a slightly accurate schedule at the start of a technical project. Besides not knowing the design and what it may take to integrate all the subsystems, things outside the project's control are guaranteed to upset the planning. Typical for aerospace projects are funding availability (the money must be there to hire engineers or pay the contractor), the procurement process (how long it takes to buy something), facility access (testing is often rescheduled because other projects are using a facility), and countless technical issues. This "measuring stick" of project progress is not a precise standard that can be used to beat the subsystem leads into doing their work! It is more accurately described as a loosely strung bungee cord that different people grab at various points, each stretching the length and alignment of the rather incorrectly spaced measurement marks. Use the chart as a tool to help monitor and assess where a project stands. It is better than having no idea of task timing at all, and it keeps the systems engineer aware of conflicts and of the product integration flow.
Still, you could have the most exceptional engineering team running far behind the original schedule, with added tasks and missed milestones, and the most mediocre group tracking right on the Gantt Chart schedule. You would probably still rather ride on the rocket the first group produced late than launch on time with the second.
Figure 5. Gantt Chart showing the task schedule and progress.
Outline of a Detailed Systems Engineering Management Plan (SEMP) for a Student Project (NASA, 2002)
The SEMP is a planning document that should be completed by the Systems Engineer at the end of Phase A, and formally updated as needed thereafter. It primarily plans activities and reviews, and assigns SE functions to individuals. The level of detail needed here depends on team size, project scope, etc., with the main sections listed as follows:
(Include mission overview, project schedule with life cycle and reviews.)
2. Systems Engineering Activities.
(Describe the overall life cycle, including the main systems engineering activities for each phase, regardless of who performs them. Describe critical decisions and activities, such as reviews.)
(Describe methods used for communicating systems engineering activities, progress, status, and results.)
4. Systems Engineering Functions.
4.1 Mission Objectives.
(The Systems Engineer should be responsible for creating a team responsible for the Level 1 Requirements and Mission Objectives.)
4.2 Concept of Operations Development.
(The Systems Engineer defines who develops the concept of operations, what format is planned, and when it is due. Define who develops the ground-based verification concept, what format is planned, and when it is due.)
4.3 Mission Architecture and Design Development.
(Define who develops the Architecture and Design, the planned format, and when it is due. Define who develops and maintains the Product Breakdown Structure.)
4.4 Requirements Identification and Analysis.
(Define who develops the requirements hierarchy, who is responsible for each part of the hierarchy, and who identifies and is responsible for crosscutting requirements. Define when requirements identification is due and when formal configuration control is to begin.)
4.5 Validation and Verification.
(Define who is responsible for validation and verification activities and how this is accomplished.)
4.6 Interfaces and ICDs.
(Define which ICDs are planned, which interfaces are to be included, who is responsible for developing the ICDs, and who has approval and change-management authority.)
4.7 Mission Environments.
(Define the applicable mission environments, who is responsible for determining mission-specific environmental levels or limits, and how each environmental requirement is to be documented.)
4.8 Orçamentos de recursos e alocação de erros.
(Liste os orçamentos de recursos que a Engenharia de Sistemas rastreará, e quando eles serão colocados sob gerenciamento de configuração formal).
4.9 Gerenciamento de Riscos.
(Define who is responsible for defining acceptable risk and where this is documented. Define the role of systems engineering in risk management and how the analysis are to be accomplished.)
AUSSP. (2007). AubieSat-1: Auburn's First Student-Built Satellite.
NASA. (1995). NASA Systems Engineering Handbook, SP-610S. PPMI.

Chapter 4: System Engineering Tools.
By David Beale and Joseph Bonometti.
This chapter is meant to provide supplemental examples of applying systems engineering tools that a student team may need during the SE design process. It is certainly not all-inclusive of the breadth and sophistication of the tools available in industry. What is presented here is a set of core tools, some basic and simplified for easy application to a student project.
It is a good time to reinforce the idea of the art of Systems Engineering as compared to the artist's "tools". Practice, understanding, maintenance, and tailoring of any tool set are necessary in every profession, but if the tools become the object of the art, or the end goal in itself, the art has failed its master. In today's technical and highly complicated aerospace field, the danger is overuse of and reliance upon the SE tools to produce a product that is successful in the eyes of the customer. Only sound engineering and good judgment will triumph in that regard, while overreliance on "the process" only serves to make a slave of the systems engineer and to obscure the critical impediments to success behind overly strict attention to filling out documents, making deadlines, adding up budgets, and presenting insipid data. Therefore apply the tools with the intent that they serve to:
Aid in communications
Prevent "dumb" errors
Highlight the most important problems or issues
Ensure timeliness
Produce the best possible product for the given situation.
Do not treat the tools as:
The most important thing on the project
The substitution for good engineering or sound judgment
The process to ensure all (particularly design) errors are eliminated
What you get paid (or graded) to produce.
Document in outline form.
Product Breakdown Structure (PBS),
Concept of Operations.
Validate and Verify.
Interface Control Document.
Mass, Power, Cost, Link budgets.
Failure Mode Analysis.
Store and baseline documents.
Work Breakdown Structure (WBS), Gantt Chart, SEMP.
Table 1. Systems Engineering Tools. *Other tools are applicable for Architectural Design but not presented in this Chapter, such as functional analysis or functional decomposition (see example in Chapter 2), decision matrices, House of Quality, in addition to software simulation (see example in Chapter 8), prototyping and modeling.
Product Breakdown Structure (PBS)
A product breakdown structure (PBS) is a hierarchical breakdown of the hardware and software products of the project. It is created as part of the SE architectural design function. The following example (Figure 1) comes from (NASA, 2007). These can be created in PowerPoint using the Insert > Diagram > Organizational Chart. Some discretion can be used to combine blocks (most often for smaller projects), or add specificity to others. The PBS communicates the areas to be worked and supports task assignments, budgeting and other SE tool development. For the SOFIA project (Figure 2), the two major subsystems are the Observatory System and the Ground System.
Figure 1. NASA Product Breakdown Structure (PBS) for a launch vehicle.
Figure 2. PBS example for the SOFIA infrared telescope.
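For a student project, the same hierarchy can be drafted in code before it is drawn as a chart. A minimal sketch (the subsystem names are illustrative placeholders, not taken from Figure 1 or Figure 2):

```python
# Sketch: a PBS captured as a nested dictionary before charting it.
# The subsystem names are illustrative, not an actual project's PBS.
pbs = {
    "CubeSat": {
        "Space Segment": {
            "Structure": {}, "EPS": {}, "C&DH": {}, "COMM": {},
        },
        "Ground Segment": {
            "Ground Station": {}, "Mission Operations": {},
        },
    }
}

def print_pbs(node, depth=0):
    """Print the PBS as an indented outline, one product per line."""
    for name, children in node.items():
        print("  " * depth + name)
        print_pbs(children, depth + 1)

print_pbs(pbs)
```

Keeping the breakdown in one plain data structure makes it easy to regenerate the chart whenever blocks are combined or split.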
Work Breakdown Structure (WBS)
Figure 3 shows a Work Breakdown Structure or WBS (a hierarchical division of hardware, software, services and data) for the SOFIA example system of Figure 2. The WBS is a tree of subdivided efforts to achieve the end objective, and it should include all work functions. Additional work functions on the WBS not found in a PBS include Project Management, Systems Engineering, etc. (from (NASA, 1995)). The WBS also aids in preparing cost estimates, manpower requirements, etc. This document, like all SE tools, is tailored to suit the project. It should be useful and applicable to the job and not simply a copy of "the last project's package" to include in the next review or briefing. A good rule to use with all SE tools: if it has no function to support the project, it should not be used. In the case of small student projects, the learning objective may make the inclusion of otherwise questionable process tools and tracking very important. Furthermore, student projects that are passed from one class or group of students to another make clear documentation through Configuration Management a greater necessity.
Figure 3. SOFIA Project highest-level WBS, and the details of the Observatory System WBS.
The WBS includes many of the same elements as the PBS but adds such things as management, safety, reliability and other significant oversight activities. It may be thought of as the "people"-derived components, whereas the PBS is more the "things" that comprise the project. Again, the purpose is to communicate the complexity and nature of the project, to stimulate the design and engineering process, and to prevent obvious omissions or duplications.
This next WBS is an example of how a traditional WBS can be set up for an unusual or advanced systems engineering task. The Momentum Exchange Electrodynamic Reboost (MXER) system is a 100 km space tether that can pick up a satellite and toss it on a trajectory to GEO, the moon, or interplanetary space. It is reboosted by pushing against the Earth's magnetic field and can then be ready to impart momentum to the next satellite launched to LEO. Note how its unique components and needs are captured in the WBS. The "Propagator Code" is a separate computer algorithm that predicts where the tether end will be at rendezvous. Because this was a key factor in the design, control and operations, it was a top-level area to be worked. It could have been included as a lower-tier item in a traditional avionics or spacecraft control area, but in this technology development project it was best served by breaking it out at the top. How does one know when to do that? That is the art of Systems Engineering! Experience, engineering intuition and consultation with design team members are the typical ways one makes that judgment. It should never be based on the last project's documentation just because it is easy to "cut and paste" and move on to the next SE tool.
Figure 4. MXER (Momentum Exchange Electrodynamic Reboost) Tether Work Breakdown Structure.
A WBS does not have to be a hierarchical chart. See the WBS example in Chapter 2 which is based on a Gantt Chart and a WBS outline structure using MS-Project.
Trade Studies.
A trade study is a tool used to help choose a solution to a problem or to bound the development area for a particular project. Trade studies are often used early in the product development cycle and, if used properly, can be one of the most important engineering tasks in the life cycle of a product. In highly sophisticated projects, or the design of "systems-of-systems," the trade study is often very complex, with engineering details approaching the actual design process in some areas. These studies are also extremely important in defining and illuminating which factors are truly the most influential to the success of the work. The most surprising outcomes should originate in these early (Phase A or B) trade studies; if not, the surprises will come at the end of a project, where the time and money required to make a change are excessive.
If a problem has multiple solutions, a trade study ranks the solutions by assigning each one a numerical value. A simple method determines this value from weight factors and a normalization scale applied to the evaluation criteria. Evaluation criteria are the important factors that we want to include in the trade study. Weight factors dictate how important the evaluation criteria are relative to each other. The normalization scale creates a common interval scale that allows us to set a numerical value for each of the evaluation criteria.
Cost, mass, volume, power consumption, legacy, and ease of use are some basic evaluation criteria (note: depending on the project, adding a criterion, or eliminating one listed here, could be appropriate). It is also very important to understand that the choice of weight factors and normalization scale strongly drives this process. A great deal of care must be taken when setting these values, because the result can be highly sensitive to intentional or unintentional bias. For example, if one lists the cost of insurance along with the initial purchase price as evaluation criteria when buying a car, but assigns the insurance a much higher weight factor than the purchase price, the ranking of the different car options changes markedly. When the insurance is much less than the purchase price and varies only slightly among the various cars, such a weighting will not yield the best value if what you truly desire is low overall lifecycle cost or the lowest cost per mile.
In some very close trade spaces, the desire may be to distinguish among various options, and a tiered weighting scale can be employed (e.g., a weighting scale of 1, 3 and 9). This intentionally inflates the small distinctions among options so that a clear "winner" can be determined. Such schemes should be used extremely carefully and only when options are otherwise truly impossible to differentiate.
Simplified Trade Study Steps.
1. Define the problem.
2. Define constraints on the solutions.
3. Find 3-5 solutions.
4. Define evaluation criteria.
5. Define weight factors.
6. Define normalization scale.
7. Fill out the trade study (e.g., in spreadsheet format).
8. Rank the solutions.
Trade Study Example – Purchasing a Car (from J-M Wersinger and Thor Wilson)
Based on the steps listed above:
1. I want to find the best option for buying a car.
2. The car must be less than $50,000 and it must be local.
3. A black Civic with 37,000 miles, costing $4,000, in poor condition. A red BMW with 57,000 miles, costing $17,000, in excellent condition. A white Passat with 6,000 miles, costing $7,000, in good condition.
4. Cost, mileage, condition of the car, and color of the car.
5. Assign a weight factor of 3 for cost and condition of the car, 2 for mileage, and 1 for color of the car.
6. Normalization scales:
Condition of the car.
Color of the car.
Table 2. Trade study example.
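The car example above can be computed as a weighted sum. In this sketch the weight factors come from step 5, while the 1 (worst) to 5 (best) normalization scores for each criterion are assumed values for illustration only (the actual Table 2 values are not reproduced here):

```python
# Sketch of the weighted-sum method from the car example.
# Weights come from step 5; the 1-5 scores are assumed for illustration.
weights = {"cost": 3, "condition": 3, "mileage": 2, "color": 1}

scores = {
    "Civic":  {"cost": 5, "condition": 1, "mileage": 3, "color": 2},
    "BMW":    {"cost": 1, "condition": 5, "mileage": 2, "color": 3},
    "Passat": {"cost": 4, "condition": 4, "mileage": 5, "color": 3},
}

def rank(scores, weights):
    """Return (option, total weighted score) pairs, best first."""
    totals = {
        option: sum(weights[c] * s for c, s in crit.items())
        for option, crit in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for option, total in rank(scores, weights):
    print(option, total)
```

Changing a single weight factor and re-running the ranking is a quick way to check how sensitive the "winner" is to bias.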
Trade Study Example – Comparison of Microcontrollers for CubeSat (AUSSP, 2007)
Table 3. Trade study example, microcontroller versus FPGA.
This is a trade study conducted by the C&DH subsystem team to evaluate the relative strengths and weaknesses of standard 8-bit microcontrollers and antifuse (one-time programmable) FPGAs (Field Programmable Gate Arrays) in CubeSat applications. While microcontrollers have been used extensively in CubeSat applications, FPGAs are a relatively new technology that may provide a solution to radiation-induced single event upsets (SEUs), common in a sun-synchronous low Earth orbit. Evaluation criteria were chosen such that the trade would compare not a specific piece of hardware from each category, but rather the essential traits of any piece of hardware from each category.
Result: The inherent radiation tolerance of any piece of hardware implemented on a CubeSat is a feature of critical importance. While antifuse FPGAs appeared to be a solution for SEUs in small satellites, they did not score higher than the more traditional microcontrollers in the trade study. When performance, cost, time, and the scope of implementing each system are compared, FPGAs seem to take only the performance category, with microcontrollers sweeping the others. Even with radiation tolerance at 30% of the total weighted score (an example of biasing the analysis to check the robustness of the result), FPGAs surprisingly trail microcontrollers. Undoubtedly, this type of trade study will have very different results for missions with more resources, more time, and more experienced engineers. However, it appears that the correct route is to implement a standard microcontroller, coupled with logical SEU detection and correction algorithms.
This was a subsystem comparison that did not consider the systems engineering implications for the other subsystems, which would have further solidified the choice of the standard programmable controller. With an antifuse FPGA, only one set of instructions can ever be implemented, so the other subsystems would have to lock in technology decisions earlier and would have less ability to correct issues through software changes. From a systems engineering (or project management) point of view, a greater number of SEUs, additional mass in shielding, or more effort in component layout (taking advantage of the shielding effect of tanks, structures, etc.) are all acceptable compromises compared to an inflexible but radiation-hardened C&DH subsystem.
Interface Control Document Examples.
Interface Control Documents (ICDs) are inherently simple: they record all the places where it is easy (and past experience has proven it so) to miscommunicate technical specifications and requirements among subsystem teams. Misaligned bolt holes or missing connector pins are costly to fix when final assembly is being made and the project is a few weeks behind schedule in meeting a fixed launch date. The document is another place to double-check voltages, fluid flow rates, thermal loads, etc. to ensure compliance with standards and higher-level requirements. The ICD also can help account for the "missing mass" never planned for in the form of wiring, brackets and miscellaneous hardware required in a real working assembly, since rarely are all system components connected end to end.
An ICD is not only documentation; it can also be hardware designed to check fit and test actual components throughout development. This is especially helpful in large projects with components and subsystems being developed around the world. Instead of physically putting the parts together, an interface hardware model is constructed and tested separately against each part, without impacting either group's schedule or development process. ICDs apply to ground equipment and wireless communications as well: anywhere there is a need to document separate functions that will ultimately interconnect, an ICD can be developed. A wide variety of document forms are shown in the following examples. Tables or spreadsheet formats are common, but ICDs can include technical documents, pictures, hardware, graphs and specification references.
The systems engineer is responsible for keeping the ICD current and accurate. Subsystem leads will often blindly apply whatever they last received as the specifications to which they are designing and building. Therefore the systems engineer must ensure that every team gets each and every change update and that conflicts are resolved early in the design phase. There is also the need to ensure that layout room, mass, thermal contact, physical rigidity and electrical/data parameters are satisfactory for all subsystems as well as the mission objectives.
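The cross-checking role described above lends itself to automation: an ICD kept in spreadsheet or record form can be scanned for interfaces that one subsystem expects but no subsystem provides. A hypothetical sketch (the signals and subsystems are invented for illustration, not taken from the AS-I ICD):

```python
# Sketch: machine-checking an ICD for unmatched interfaces.
# The signal records below are invented for illustration.
supplied = {  # signal -> subsystem that provides it
    "3.3V regulated supply": "EPS",
    "5.0V regulated supply": "EPS",
    "I2C bus": "CDH",
}
consumed = {  # signal -> subsystems that expect to receive it
    "3.3V regulated supply": ["CDH", "COMM"],
    "5.0V regulated supply": ["COMM"],
    "12V supply": ["ADCS"],  # deliberately has no supplier
}

def unmatched_interfaces(supplied, consumed):
    """Return consumed signals that no subsystem supplies."""
    return sorted(sig for sig in consumed if sig not in supplied)

print(unmatched_interfaces(supplied, consumed))  # ['12V supply']
```

The same record structure extends naturally to voltage ranges, connector pinouts, or data rates, with a compatibility check per field.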
ANALOG SIGNAL NAME (INTER-MODULE)
Voltages:
- 3.3V regulated supply: CDH MCU ADC (PF0)
- 5.0V regulated supply: CDH MCU ADC (PF1); TNC power, XCVR power
- CDH MCU ADC (PF2); nothing directly uses
- via I2C from ADC1 on EPS
- via I2C from ADC1 on EPS
- solar cell output: via I2C from ADC1 on EPS
- the entire array: via I2C from ADC1 on EPS
- nothing directly powered from this
EPS Currents (sent as voltage):
- bat1 charging current: via I2C from ADC3 on EPS
- bat1 discharging current: via I2C from ADC3 on EPS
- bat2 charging current: via I2C from ADC3 on EPS
- bat2 discharging current: via I2C from ADC3 on EPS
- solar cell array output current: via I2C from ADC1 on EPS
- 5.0V bus current draw: via I2C from ADC1 on EPS
- 3.3V bus current draw: via I2C from ADC2 on EPS
- 3.7V bus current draw: via I2C from ADC2 on EPS
- solar cell 1 current: via I2C from ADC2 on EPS; for attitude determination
- solar cell 2 current: via I2C from ADC2 on EPS
- solar cell 3 current: via I2C from ADC2 on EPS
- solar cell 4 current: via I2C from ADC2 on EPS
- solar cell 5 current: via I2C from ADC2 on EPS
- solar cell 6 current: via I2C from ADC2 on EPS
Other:
- Transceiver supply voltage: from ADC on COM board; determine if Transcvr relay works
Temperatures (all need same reference voltage; may have to get rid of some):
- thermistor to ADC0 on CDH (eight channels)
- thermistor to MCU (PF7)
- 2ndary rcvr temp: thermistor to MCU (PF4)
- thermistor to MCU (PF5)
- thermistor to ADC on (PF6)
Table 4. Example Interface Control Document.
Table 5. MCU/TNC interface.
Table 6. Example Interface Control Document.
Mass Budget Examples.
For aerospace applications, the mass budget spreadsheet is the icon of the systems engineer's toolbox. Because minimizing mass is so critical in aircraft, rocket and spacecraft design, the bottom-line number here is often the most monitored and distressing result in SE. Although of great importance, the mass budget is simple to set up and use. Unfortunately, it is often poorly estimated and certainly the most abused SE tool. Mass is critical, so even small errors can lead to big problems in aerospace. A 30% margin is generally added to preliminary mass estimates, with more than 50% added to systems or subsystems that are new or use unique technology. Even a spacecraft that is a duplicate of a previous mission will carry as much as a 10% margin, since each mission has unique environments and almost any requirement modification will translate into a mass penalty. Some items are best not burdened with a flat mass margin, such as propellants: such fluids tend to be heavy, and their quantities are normally well known from the trajectory analysis and other factors. In the case of a heavy radiation shield, which is significantly massive but essentially "dumb mass", a margin of 2% or less might be applied.
The systems engineer should not be the clerk or secretary who enters whatever values the subsystem leads produce and then announces the grand total to the team. There is judgment in producing an initial gross mass, assigning a maximum mass allowance to each subsystem and giving guidance on contingency at the part, component or system levels. The total should meet the other mission-level criteria such as the launch vehicle limit, the planned payload, or a cost target based on mass. Determining whether more advanced technology is required, or whether the mass estimate should be reallocated, takes engineering insight. The systems engineer must serve as the "honest broker" to direct the best overall decisions for the team and to work through the inevitable conflicts among subsystems.
Table 7. C&DH mass budget example.
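The margin policy described above can be tracked with a short rollup that applies each item's own margin before summing. The item names, masses and margins below are illustrative only:

```python
# Sketch: a mass budget rollup with the per-item margins described in
# the text. Item names, masses (kg) and margins are illustrative only.
budget = [
    # (item, best-estimate mass in kg, margin fraction)
    ("Structure",        2.00, 0.30),  # preliminary estimate: ~30%
    ("New tether tech",  1.00, 0.50),  # new/unique technology: 50%+
    ("Heritage C&DH",    0.50, 0.10),  # rebuild of a flown design: ~10%
    ("Radiation shield", 4.00, 0.02),  # "dumb mass": 2% or less
]

def total_with_margin(budget):
    """Sum the best-estimate masses with each item's margin applied."""
    return sum(mass * (1.0 + margin) for _, mass, margin in budget)

print(f"Total with margin: {total_with_margin(budget):.2f} kg")
```

Keeping the margin next to each line item, rather than one flat percentage at the bottom, is what lets the systems engineer treat propellant and "dumb mass" differently from new technology.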
Power Budget Examples.
Power budgets are often kept at the subsystem level for convenience, but this is really a systems engineer's function and concern as much as the mass budget. There are trades made among subsystems, technology selections and the integration of mission operations. There are two key power budgets: the total power (everything on) and the operational limits (peak power allowed). Many times the power budget will restrict certain functions and limit operational modes. Transmitting data is often a power-intensive task and is limited to certain periods of the mission when other subsystems are shut down or in idle mode. Earth-orbiting satellites will have periods of darkness when solar panels no longer provide power and batteries must be reserved for heating crucial spacecraft components. Such trades and operations decisions are at the SE level, even when the power budget is maintained by a single subsystem lead.
In the C&DH budget example below, several power modes or states are listed. Each has its own permissible upper or lower limit, and each is accounted for separately. These states correspond to the mission modes and timeline.
Table 8. C&DH Power Budget example.
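The per-mode bookkeeping can be sketched as a table of component draws checked against each mode's allowed limit. All wattages and mode names below are illustrative, not the actual Table 8 values:

```python
# Sketch: a power budget evaluated per operating mode, in the spirit of
# Table 8. Component draws (W) and mode limits are illustrative only.
draws = {
    "transmit": {"C&DH": 0.4, "COMM TX": 2.0, "Heaters": 0.0},
    "idle":     {"C&DH": 0.4, "COMM TX": 0.0, "Heaters": 0.0},
    "eclipse":  {"C&DH": 0.4, "COMM TX": 0.0, "Heaters": 1.0},
}
limits = {"transmit": 3.0, "idle": 1.0, "eclipse": 2.0}  # allowed peak watts

def check_modes(draws, limits):
    """Return {mode: (total draw in W, within allowed limit?)}."""
    return {
        mode: (sum(parts.values()), sum(parts.values()) <= limits[mode])
        for mode, parts in draws.items()
    }

for mode, (total, ok) in check_modes(draws, limits).items():
    print(f"{mode}: {total:.1f} W (within limit: {ok})")
```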
Data and Link Budget Examples.
Like the power budget, the data and link budgets are often maintained by a subsystem lead, with the responsibility for overall coordination still left at the systems engineer level. The data budget is derived from the time available for communications and the baud rate. There are usually two rates and often two separate systems: one for the uplink, or communications to the system, and another for the downlink of data being sent to the ground station. There are many factors to account for, and engineering judgment is necessary when making these estimates. The quality of the data, its resolution, error checking, packet size and similar criteria are typically tracked parameters in the data and link budget.
Table 9. C&DH data budget example.
Table 10. Link budget example.
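A data budget reduces to bit rate multiplied by contact time (less overhead), and a link budget is a sum of gains and losses in decibels. A hedged sketch with illustrative numbers only (the rates, gains and losses are assumed, not taken from Tables 9 and 10):

```python
# Sketch: a downlink data budget and a simple dB-sum link margin.
# All rates, times, gains and losses below are illustrative values.

# Data budget: usable volume = bit rate x contact time x overhead factor
baud_rate_bps = 9600          # assumed downlink rate
contact_s = 8 * 60            # ~8 minutes of ground station contact
overhead = 0.8                # protocol / error-checking overhead factor
data_bits = baud_rate_bps * contact_s * overhead
print(f"Data per pass: {data_bits / 8 / 1024:.1f} KiB")

# Link budget: margin = EIRP + receive gain - losses - required power (dB)
eirp_dbm = 30.0               # transmit power plus antenna gain
rx_gain_db = 12.0             # ground station antenna gain
path_loss_db = 145.0          # free-space and other losses
required_dbm = -110.0         # receiver sensitivity at the chosen rate
margin_db = eirp_dbm + rx_gain_db - path_loss_db - required_dbm
print(f"Link margin: {margin_db:.1f} dB")
```

A negative margin means the downlink closes only at a lower rate, which feeds straight back into the data budget: the two budgets must be iterated together.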
Failure Mode Analysis.
Failure mode analysis is a tool for risk management; it is the more rigorous engineering or technical aspect of risk management. A failure mode is the way in which something fails. Every failure has one or more consequences, which are called the failure effects. A failure cause is what induces the failure. After identification of all possible failures, the effects of each failure are estimated. Next, a mitigation plan is prepared for each potential failure cause.
Failure Modes Analysis on the CubeSat.
Because the CubeSat project does not have the funds or the available space to mitigate every potential failure, the system must be redundant, fault tolerant, and able to correct detected errors. Failure Modes Analysis is done to determine what the potential failures in a design are and how to mitigate them. This analysis determines the relation between the failure of a single component and its effects on the system as a whole. We accept that no system is perfect, and so some risks are acceptable. Therefore, mitigation attempts will be focused on those failures which might cause mission failure. In most cases, a secondary component can be used to mitigate any mission failure. The table immediately following shows the four different ways a component failure can affect the whole system.
Table 11. Failure Mode Analysis, Codes of Severity of a Potential Failure.
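A failure-mode worksheet of this kind can be kept as simple records ranked by severity code. The entries and the four-level code convention below are invented for illustration (the actual codes are the ones defined in Table 11):

```python
# Sketch: a failure-mode worksheet kept as records and sorted by a
# severity code (here assumed 1 = loss of mission ... 4 = negligible).
# The entries and the code convention are invented for illustration.
failure_modes = [
    # (component, failure mode, effect, severity code, mitigation)
    ("Antenna", "fails to deploy", "no communications", 1, "redundant burn wire"),
    ("Battery heater", "stuck on", "excess power drain", 2, "watchdog cutoff"),
    ("Beacon LED", "burns out", "no visual check", 4, "accept the risk"),
]

def by_severity(modes):
    """Return the failure modes ordered most severe (lowest code) first."""
    return sorted(modes, key=lambda m: m[3])

for comp, mode, effect, sev, fix in by_severity(failure_modes):
    print(f"[{sev}] {comp}: {mode} -> {effect} (mitigation: {fix})")
```

Sorting by severity keeps the team's mitigation effort pointed at the failures that could end the mission.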
Identified Single Point Failures.
A single point failure occurs if the mission fails as a result of a single component on the satellite failing. Simply put, it is hardware without which the satellite cannot operate. It is extremely important to detect and eliminate all single point failures that could arise on AS-I. In an effort to eliminate single point failures, a redundancy philosophy has been adopted: all satellite hardware identified as single point failure components must have redundant (secondary) hardware. Generally, having redundant hardware means simply having two of the same component, where they operate independently. A common variation on this theme of "if it fails, use the other one" is the hot spare, where the second component is active, functioning and continually ready to operate at any moment. Sometimes the units are regularly swapped as the primary unit for other reasons, such as to preserve battery lifetime or prevent joint seizing in mechanisms.
Other redundancy philosophies include having a totally different piece of hardware as the backup. This may be a low-gain antenna serving the housekeeping needs and a high-gain antenna for the mission science; if one fails, the other can perform the same duty, but with diminished total data throughput. As with most aerospace-related redundancy schemes, this provides a significant mass savings over having two of both types of communications systems onboard. Another mass-saving method is to identify the actual weak point in the subsystem and provide the redundancy at the component or part level. A good example is in rocket fluid valves, where the valve housing is the heaviest and least likely part to fail. In the past, welding in a second complete valve, inline or in parallel, was the solution. However, since the failure is almost always in the valve seat (e.g., not closing tight), redundancy can be achieved by duplicating the seat seals and using dual electromagnet actuating coils. This not only saves the mass of a valve housing, but also the cost and risk of two more welds in the system. Redundancy can also lie in de-rating components to ensure they last the mission lifetime, or in adding margin to a subsystem so that if its redundant part fails, it can "ramp up" to higher performance and diminish or eliminate the loss. A spacecraft power conversion box or transformer might operate as two low-power units supplying the primary instrument's high voltage, thus gaining long life with lower risk, while each has the capacity to meet the entire voltage requirement (or the minimum for mission success) should the other component fail. Details on the AS-I redundancy choices for the single point failures are given in each subsystem's documentation. The following components have been identified as single point failure components in the CubeSat:
Power Storage (Li-ion batteries)
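The payoff of these redundancy schemes can be estimated with simple reliability arithmetic: two independent units in parallel fail only if both fail. The per-unit reliability below is an assumed number, not AS-I mission data:

```python
# Sketch: reliability payoff of parallel redundancy. The per-unit
# reliability is an assumed number for illustration only.
def parallel_reliability(r_unit, n_units):
    """Probability that at least one of n independent units works."""
    return 1.0 - (1.0 - r_unit) ** n_units

r = 0.90  # assumed probability a single unit survives the mission
print(f"single unit:    {parallel_reliability(r, 1):.4f}")
print(f"hot spare pair: {parallel_reliability(r, 2):.4f}")
```

The same arithmetic shows why redundancy at the weak part (the valve seat) buys nearly the same reliability gain as duplicating the whole valve, at far less mass.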
Example Failure Mode Analysis – COMM System, C&DH System and Structures.
Table 12. Failure mode analysis for Comm system.
Table 13. Failure mode analysis for C&DH system.
Table 14. Failure mode analysis for Structures system.
Gantt Chart.
A Gantt Chart is a bar chart that can be used to allot time to tasks, schedule reviews, and date milestones. Tasks are the project activities; each has a start date and an end date, and is generally written as a phrase starting with a verb (e.g., "Create structure"). The chart often needs to be updated, since end dates are usually estimates and tend to slide (almost always later rather than earlier in real practice). Milestones are checkpoints, due dates, dates for interim goals, or dates of reviews. Since the chart period is usually set in days, weeks, or months, meetings, reviews and milestones appear as single dots or bullets rather than a line showing how long the item takes. See Chapter 2 for other examples.
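The mechanics are simple enough to sketch: each task is a bar spanning its start and end on a shared time axis, and milestones are single points. The tasks, week spans, and milestone below are illustrative placeholders, not a real project plan:

```python
# Sketch: a text-mode Gantt chart. Task names, week spans, and the
# milestone are illustrative placeholders.
tasks = [
    # (task phrased as a verb, start week, end week inclusive)
    ("Define requirements",  1, 3),
    ("Create structure",     3, 7),
    ("Integrate subsystems", 6, 10),
]
milestones = {"CDR": 8}  # single-point events shown as a dot
WEEKS = 10

for name, start, end in tasks:
    bar = "".join("#" if start <= w <= end else "." for w in range(1, WEEKS + 1))
    print(f"{name:<22}{bar}")
for name, week in milestones.items():
    dot = "".join("o" if w == week else "." for w in range(1, WEEKS + 1))
    print(f"{name:<22}{dot}")
```

Overlapping bars (weeks 6-7 above) are exactly the integration conflicts the systems engineer should be watching for.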
The Gantt Chart is typically one of those SE "tools" that can make a slave of its master in real practice. If you knew exactly what tasks were needed, precisely how long each task really took and the proper order they would be done in, then the Gantt Chart could properly be used as a true gauge of engineering progress. If real progress was ahead of the Gantt Chart, the team could be assumed exceptional and given time off. If they were behind, they needed to stay late and get back on track. But that all assumes the chart indicates the true progress. Only in rare circumstances is there sufficient information to lay out even a slightly accurate schedule at the beginning of a technical project. Besides not knowing the design and what might be needed to integrate all the subsystems, things outside the project's control are guaranteed to upset the planning. Typical for aerospace projects are funding availability (money must be there to hire engineers or pay the contractor), the procurement process (how long it takes to buy something), facility access (testing is often rescheduled due to other projects using a facility) and numerous technical issues. This "measuring stick" of a project's progress is not a precise yardstick that can be used to beat subsystem leads into doing their work! It is more accurately described as a loosely laid-out bungee cord that different people pick up at various points, each affecting the length and alignment of the rather inaccurately spaced measurement marks. Use the chart as a tool to help monitor and roughly gauge where a project stands. It is better than not having any idea of the task timing, and it does make the systems engineer aware of conflicts and product integration flow. Nonetheless, you could have the most exceptional engineering team far behind the original schedule, with added tasks and missed milestones, and the sorriest group of misfits right on the Gantt Chart schedule.
You probably would still prefer to ride in the rocket the first group produced late than launch on time with the second.
Figure 5. Gantt Chart showing schedule of tasks and their progress.
Outline of a Detailed Systems Engineering Management Plan (SEMP) for a Student Project (NASA, 2002)
The SEMP is a planning document that should be baselined by the Systems Engineer at the end of Phase A, and formally updated as needed thereafter. It primarily schedules activities and reviews, and assigns SE functions to individuals. The level of detail necessary here depends on the size of the team, the scope of the project, etc., with the primary sections listed as follows:
(Include Mission Overview, Project schedule with life cycle and reviews.)
2. System Engineering Activities.
(Describe the overall lifecycle including the major systems engineering activities for each phase, irrespective of who does them. Describe critical decisions and activities such as Reviews.)
3. Communications.
(Describe methods utilized for communicating systems engineering activities, progress, status and results.)
4. Systems Engineering Functions.
4.1 Mission Objectives.
(The Systems Engineer should be responsible for creating a team that is responsible for the Level 1 Requirements and Mission Objectives.)
4.2 Operations Concept Development.
(The Systems Engineer defines who develops the operations concept, what format is planned and when it is due. Define who develops the ground based verification concept, what format is planned and when it is due.)
4.3 Mission Architecture and Design Development.
(Define who develops the Architecture and Design, what format is planned and when it is due. Define who develops and maintains the Product Breakdown Structure.)
4.4 Requirements Identification and Analysis.
(Define who develops the requirements hierarchy, define who is responsible for each part of the hierarchy, define who identifies and is responsible for the crosscutting requirements. Define when requirements identification is due and when formal configuration control is expected to start.)
4.5 Validation and Verification.
(Define who is responsible for the validation and verification activities and how this is accomplished.)
4.6 Interfaces and ICDs.
(Define which ICDs are planned, what interfaces are to be included, who is responsible for developing the ICDs and who has approval and configuration management authority.)
4.7 Mission Environments.
(Define the applicable mission environments, who is responsible for determining the mission specific environmental levels or limits, and how each environmental requirement is to be documented.)
4.8 Resource Budgets and Error Allocation.
(List the resource budgets that Systems Engineering will track, and when they will be placed under formal configuration management.)
4.9 Risk Management.
(Define who is responsible for defining acceptable risk and where this is documented. Define the role of systems engineering in risk management and how the analyses are to be accomplished.)
4.10 System Engineering Reviews.
(Define which system engineering reviews are planned and who is responsible for organizing them.)
5. Configuration Management.
(Define what systems engineering documentation is required and when it is to be placed under formal configuration management. Define the method to archive and distribute System Engineering information generated during the course of the lifecycle.)
6. System Engineering Management.
(Define the Systems Engineering Organization Chart and Job Responsibilities. Define trade studies, who does them and when they are due.)
AUSSP. (2007). AubieSat-1: Auburn's First Student-Built Satellite.
NASA. (1995). NASA Systems Engineering Handbook, SP-610S. PPMI.

System engineering trade study template


U.S. Department of Transportation, Federal Highway Administration, 1200 New Jersey Avenue, SE, Washington, DC 20590.
3.9.9 Decision Support/Trade Studies.
Trade studies compare the relative merits of alternative approaches, and so ensure that the most cost-effective system is developed. They maintain traceability of design decisions back to fundamental requirements. Trade studies do this by comparing alternatives at various levels for the system being developed. They may be applied to concept, design, implementation, verification, support, and other areas. They provide a documented, analytical rationale for choices made in system development.
Trade studies can be used in various phases and at different depths throughout the project to select from alternatives or to understand the impact of a decision. The inputs vary depending on what is being analyzed: in concept exploration the alternatives will be concepts, while in the design phase they will be design alternatives. The stakeholders are essential here to define and rate the criteria and to validate the results. The analysis may be done qualitatively, or by a model or simulation.
These inputs will be used only as available.
Project Goals and Objectives drive the selection of alternatives for concepts.
User needs and Concept of Operations drive the selection of alternatives for requirements.
SEMP and Project Plan constrain what may be developed, and define budget and schedule.
Stakeholder involvement provides the key metrics and may suggest alternatives.
Risk assessment evaluates each alternative relative to risk, balanced against effectiveness.
Technical reviews present the results and gather inputs and feedback.
Selection of the best of the alternatives, whether for concept, requirements, design, or implementation, provides a choice based on solid analysis.
Rationale is the documentation of the alternatives compared, the criteria for selection, the analysis methodology, and the conclusions.
First, define the question the trade study is to answer. This may be the selection of the most cost-effective concept or design. It may be to narrow down choices for more detailed evaluation. It may be to demonstrate that the choice made is the best one.
Experienced specialists will draw from the available inputs to identify the key evaluation criteria for the decision under consideration. These are measures of effectiveness, metrics that compare how well alternatives meet the needs and requirements. Examples are capacity [vehicles per hour], response time, throughput, and expandability.
Generally, there are multiple criteria, so these same experts will assign each of them a weighting that reflects its relative importance.
A trade study starts with the alternative concepts or designs that are to be evaluated. Be sure that all reasonable alternatives are on the table.
Generally, the emphasis is on performance criteria such as speed or effectiveness. For each alternative, the criteria may be evaluated quantitatively or qualitatively, and by such methods as simulation, performance data gathered from similar systems, surveys, and engineering judgment. These disparate evaluations are merged using the weighting factors to give a measure of overall effectiveness for each choice.
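The merging step described above amounts to a weighted sum of normalized per-criterion scores. The criteria, weights, and scores in this sketch are hypothetical, chosen only to illustrate the mechanics:

```python
# Combine per-criterion scores into one overall effectiveness value using
# relative weights. Criteria names, weights and scores are illustrative only.
weights = {"capacity": 0.5, "response_time": 0.3, "expandability": 0.2}

def overall_effectiveness(scores, weights):
    """scores: dict of criterion -> normalized score (0-10, higher is better)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(weights[c] * scores[c] for c in weights)

alt_a = {"capacity": 8, "response_time": 6, "expandability": 4}
alt_b = {"capacity": 6, "response_time": 9, "expandability": 7}
print(overall_effectiveness(alt_a, weights))  # 0.5*8 + 0.3*6 + 0.2*4 = 6.6
print(overall_effectiveness(alt_b, weights))  # 0.5*6 + 0.3*9 + 0.2*7 = 7.1
```

Because the result is driven by the weights as much as the scores, the weight assignments deserve the same stakeholder scrutiny as the evaluations themselves.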
Estimate the cost of each alternative: the development cost and the life cycle cost, which includes operation and maintenance. Use the techniques of risk assessment [see Chapter 3.9.4] to compare the alternatives relative to technical or project risk. Determine the impact of each alternative on the schedule. Eliminate those that introduce too much risk of missing deadlines.
Sensitivity analysis may also be used, especially with simulation, to see the effect of changes in sub-system parameters on overall system performance. The sensitivity analysis and the evaluations may suggest other, better alternatives.
Select and document the preferred candidate.
Plotting each alternative's [concept or design] overall effectiveness, based on the combined weighted metrics, against cost, or the other factors, is useful for evaluating the relative merits of each. It supports stakeholders in making a good decision. Document the decision and the rationale behind it, to provide traceability back to the higher-level requirements. This document is also a repository of alternatives, in case a change is needed down the road.
Where do trade studies take place in the project timeline?
Is there a policy or standard that talks about Trade Studies?
FHWA Final Rule requires the analysis of system configurations to meet requirements.
Which activities are critical for the system’s owner to do?
Ensure that the proper stakeholders are involved.
Suggest or elaborate on decision criteria.
Review the process and results of the trade studies.
How do I fit these activities to my project? [Tailoring]
The level of each activity should be scaled to the size of the project and the importance of the issue being traded off. For example, a small project may use qualitative measures to compare a small number of alternatives, without simulation or sensitivity analysis; an upgrade to a signal system might trade off features based on stakeholder priorities. A large project may use simulation to analyze key issues and perform sensitivity analysis.
What should I track in this process step to reduce project risks, and get what is expected? [Metrics]
These metrics check whether the set of alternatives is possibly driving a risky solution.
Number of high-risk alternatives selected.
Number of high-cost alternatives selected.
Number of selected alternatives that introduce schedule risk.
Percentage of alternatives examined.
Percentage of planned sensitivity analyses completed.
Checklist: Are all the bases covered?
Has a broad and reasonable selection of alternatives been examined?
Does the rationale for the trade study conclusions flow out of the needs and requirements?
Is the sensitivity of system effectiveness to changes in key parameters well understood?
Is the selection rationale documented?
Are there any other recommendations that can help?
Trade studies should make maximum use of any previous work, but if nothing applicable is available, they will need to include more technical analysis. Often the two methods are combined by using analysis to predict system performance based on that of other systems. For example, well-documented improvements in traffic flow experienced when another agency implemented ramp metering could be combined with local data to predict the potential impact of a local ramp metering system.
Simulation and modeling are tools which provide an objective, quantitative comparison of the merits of the alternatives. They may, for example, predict the effectiveness of each alternative in an operational scenario. These can range from a simple spreadsheet to a full traffic simulation.
A closer look at combining metrics.
There are usually multiple metrics for evaluating the system based on the various needs that the system is to meet. Generally, they are a mix of positive metrics [more is better, such as highway capacity] and negative metrics [less is better, such as response time]. They also include both quantitative [e.g., predicted vehicle hours of delay] and qualitative values [e.g., relative rating from 1 to 10]. The units can vary as follows:
vehicles per lane per hour
seconds [of workstation response time]
minutes [of incident response time]
number [of predicted fatalities]
% [of time available]
It requires care to combine these into a single measure of overall system effectiveness without giving undue weight to one or another. Chapter 3.9.5 gives a method for doing this.
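One common way to handle the mixed units and senses (not necessarily the Chapter 3.9.5 method) is min-max normalization across the alternatives, inverting the "less is better" metrics so every score runs 0 to 1 with higher meaning better. The metric values and weights below are illustrative:

```python
# Normalize mixed metrics to a common 0-1 scale so they can be weighted and
# combined. "Less is better" metrics are inverted. All values illustrative.
def normalize(values, less_is_better=False):
    """Min-max normalize a list of raw metric values across alternatives."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0            # avoid divide-by-zero when all equal
    norm = [(v - lo) / span for v in values]
    return [1.0 - n for n in norm] if less_is_better else norm

# Three alternatives, two metrics with different units and senses:
capacity = [1800, 2200, 2000]   # vehicles/lane/hour, more is better
response = [120, 90, 60]        # seconds, less is better
cap_n = normalize(capacity)                       # [0.0, 1.0, 0.5]
rsp_n = normalize(response, less_is_better=True)  # [0.0, 0.5, 1.0]
overall = [0.6 * c + 0.4 * r for c, r in zip(cap_n, rsp_n)]
print(overall)
```

Note that min-max scaling is sensitive to the range of alternatives considered; adding one extreme option rescales every score, which is another reason to validate results with stakeholders.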
Making qualitative measures quantitative.
Often time and available information do not allow a direct quantitative assessment. For example, the design of a regional Advanced Transportation Information System [ATIS] focused on the key information needs of a large number of agencies in the region. There was very little time to do this prioritization, but there were dozens of documents that the agencies had produced discussing their needs. The approach used was to draw out, from the documents, any needs cited. Some agencies listed their "top ten" information needs in rank order; these were assigned 1 to 10 points, depending on their place in the list, 10 being best. If a need was cited without being ranked relative to other needs, it was given a medium rating of 5 points. The total points for any need then gave a metric indicating how many agencies needed the particular information, and how strongly they felt about it.
If workshops are held to collect stakeholders’ preferences, here is a simple way to get their inputs on alternatives. First, discuss the alternatives and their pros and cons. Then, list them on a flipchart and give each participant a few colored adhesive dots. Be sure each participant gets the same number of dots, about 10 – 20% of the number of alternatives. Allow each participant to place their dots next to the choices that they favor, even placing multiple dots against a choice that they particularly like. The number of dots is a metric for stakeholder preference. This type of metric could be used to compare alternatives directly or to determine relative weights for multiple metrics.
Sensitivity analysis.
Simulation, or other analytical tools, can be used to vary design parameters over their potential values and predict the effect on performance. The "knee of the curve" shows where more stringent design requirements give little system improvement.
In the example chart, the knee of the curve occurs around 15 to 20 for the design parameter [horizontal axis]. There is very little performance improvement [vertical axis] from a more stringent design. Sensitivity analysis can also be done in multiple dimensions to determine, for example, whether money should be spent on improving communications or detectors.
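A sweep like the one behind such a chart can be sketched with a toy saturating response model. The model and its numbers are hypothetical, chosen only so the knee falls in the 15-20 range discussed above:

```python
# Sensitivity sweep: vary a design parameter and watch the marginal
# performance gain flatten past the "knee". The response model is a
# hypothetical saturating curve, not data from any real system.
import math

def performance(param):
    """Toy response: rapid gains early, little improvement beyond the knee."""
    return 100 * (1 - math.exp(-param / 8))

for p in range(5, 41, 5):
    gain = performance(p + 5) - performance(p)   # marginal improvement
    print(f"param={p:2d}  perf={performance(p):5.1f}  next-step gain={gain:4.1f}")
```

Reading down the gain column shows where tightening the requirement stops paying for itself, which is exactly the judgment the sensitivity analysis supports.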
Estimating costs for alternatives.
There is an art to predicting the cost of a new system. A life cycle cost analyst can do it by extrapolating from existing systems. Qualitative assessments are often sufficient; examples are high/medium/low cost or difficulty to implement. Plotting effectiveness versus cost supports the decision.

Chapter 4: System Engineering Tools.
By David Beale and Joseph Bonometti.
This chapter is meant to provide supplemental examples of the application of systems engineering tools that may be needed during the SE design process by a student team. It is certainly not inclusive of the full breadth and sophistication of the tools available in industry. What is presented here is a set of core tools, some basic and simplified for easy application to a student project.
It is a good time to reinforce the idea of the art of Systems Engineering as compared to the artist’s “tools”. Practice, understanding, maintenance, and tailoring of any tool set is necessary in all professions, but if the tools become the object of the art, or the end goal in itself, the art has failed its master. In today’s technical and highly complicated aerospace field, the danger is overuse of and reliance upon the SE tools to produce a successful product in the eyes of the customer. Only sound engineering and good judgment will triumph in that regard, while overreliance on “the process” only serves to make a slave of the systems engineer and obscures critical impediments to success through overly strict attention to filling out documents, meeting deadlines, adding up budgets and presenting insipid data. Therefore, apply the tools with the intent that they serve to:
Aid in communications.
Prevent “dumb” errors.
Highlight the most important problems or issues.
Ensure timeliness.
Produce the best possible product for the given situation.
Do not treat the tools as:
The most important thing on the project.
The substitution for good engineering or sound judgment.
The process to ensure all (particularly design) errors are eliminated.
What you get paid (or graded) to produce.
Document in outline form.
Product Breakdown Structure (PBS),
Concept of Operations.
Validate and Verify.
Interface Control Document.
Mass, Power, Cost, Link budgets.
Failure Mode Analysis.
Store and baseline documents.
Work Breakdown Structure (WBS), Gantt Chart, SEMP.
Table 1. Systems Engineering Tools. *Other tools are applicable for Architectural Design but not presented in this Chapter, such as functional analysis or functional decomposition (see example in Chapter 2), decision matrices, House of Quality, in addition to software simulation (see example in Chapter 8), prototyping and modeling.
Product Breakdown Structure (PBS)
A product breakdown structure (PBS) is a hierarchical breakdown of the hardware and software products of the project. It is created as part of the SE architectural design function. The following example (Figure 1) comes from (NASA, 2007). These can be created in PowerPoint using the Insert > Diagram > Organizational Chart. Some discretion can be used to combine blocks (most often for smaller projects), or add specificity to others. The PBS communicates the areas to be worked and supports task assignments, budgeting and other SE tool development. For the SOFIA project (Figure 2), the two major subsystems are the Observatory System and the Ground System.
Figure 1. NASA Product Breakdown Structure (PBS) for a launch vehicle.
Figure 2. PBS example for the SOFIA infrared telescope.
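Beyond an organizational-chart drawing, a PBS can also be captured as a simple tree in code, which makes it easy to maintain and print as the design evolves. The top-level split below follows the SOFIA Observatory/Ground division mentioned above, but the lower-level items are hypothetical, not taken from the actual SOFIA PBS:

```python
# A PBS is a hierarchical breakdown of a project's products. A nested dict
# is one simple way to hold it. Lower-level items here are illustrative only.
pbs = {
    "SOFIA": {
        "Observatory System": {"Aircraft": {}, "Telescope Assembly": {}},
        "Ground System": {"Mission Ops Center": {}, "Data Archive": {}},
    }
}

def pbs_lines(tree, indent=0):
    """Flatten the tree into indented text lines, one product per line."""
    lines = []
    for name, children in tree.items():
        lines.append("  " * indent + name)
        lines.extend(pbs_lines(children, indent + 1))
    return lines

print("\n".join(pbs_lines(pbs)))
```

Keeping the hierarchy in one place like this also supports the task assignments and budget roll-ups the PBS is meant to feed.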
Work Breakdown Structure (WBS)
Figure 3 shows a Work Breakdown Structure or WBS (a hierarchical division of hardware, software, services and data) for the SOFIA example system of Figure 2. The WBS is a tree of subdivided efforts to achieve the end objective; it should include all work functions. Additional work functions on the WBS not found in a PBS include Project Management, Systems Engineering, etc. (from (NASA, 1995)). The WBS also aids in preparing cost estimates, manpower requirements, etc. This document, like all SE tools, is tailored to suit the project. It should be useful and applicable to the job and not simply a copy of “the last project’s package” to include in the next review or briefing. A good rule to use with all SE tools: if it has no function to support the project, it should not be used. In the case of small student projects, the learning objective may make the inclusion of otherwise questionable process tools and tracking very important. Furthermore, student projects that are passed from one class or group of students to another make clear documentation through Configuration Management a greater necessity.
Figure 3. SOFIA Project highest-level WBS, and the details of the Observatory System WBS.
The WBS includes many of the same elements as the PBS but adds such things as the management, safety, reliability or other significant oversight activities. It may be thought of as “people” derived components whereas the PBS is more the “things” that comprise the project. Again, the purpose is to communicate the complexity and nature of the project, to stimulate the design and engineering process and to prevent obvious omissions or duplications.
This next WBS is an example of how a traditional WBS can be set up for an unusual or advanced systems engineering task. The Momentum Exchange Electrodynamic Reboost (MXER) system is a 100 km space tether that can pick up a satellite and toss it on a trajectory to GEO, the moon, or interplanetary space. It is reboosted by pushing on the Earth's magnetic field and can be ready to impart momentum to the next satellite launched to LEO. Note how its unique components and needs are captured in the WBS. The “Propagator Code” is a separate computer algorithm that predicts where the tether end will be at rendezvous. Because this was a key factor in the design, control and operations, it was a top-level area to be worked. It could have been included as a lower-tier item in a traditional avionics or spacecraft control area, but in this technology development project it was best served by breaking it out at the top. How does one know when to do that? That is the art of Systems Engineering! Experience, engineering intuition and consultation with design team members are the typical ways one makes that judgment. It should never be based on the last project’s documentation, because it is easy to “cut and paste” and move on to the next SE tool.
Figure 4. MXER (Momentum Exchange Electrodynamic Reboost) Tether Work Breakdown Structure.
A WBS does not have to be a hierarchical chart. See the WBS example in Chapter 2 which is based on a Gantt Chart and a WBS outline structure using MS-Project.
Trade Studies.
A trade study is a tool used to help choose a solution to a problem or to bound the development area for a particular project. Trade studies are often used early in the product development cycle and, if used properly, can be one of the most important engineering tasks in the life cycle of a product. In highly sophisticated projects, or the design of “systems-of-systems,” the trade study is often very complex, with engineering details approaching the actual design process in some areas. These studies are also extremely important in defining and illuminating which factors are truly the most influential to the success of the work. The most surprising outcomes should originate in these early (Phase A or B) trade studies; if not, the surprises will come at the end of a project, where the time and money required to make a change are excessive.
If a problem has multiple solutions, a trade study will rank the solutions by giving each solution a numerical value. A simple method is to determine a numerical value for each option. This is often done based on weight factors and a normalization scale for the evaluation criteria. Evaluation criteria are important factors that we want to include in the trade study. Weight factors are used to dictate how important the evaluation criteria are relative to each other. The normalization scale creates a constant interval scale that allows us to set a numerical value for each of the evaluation criteria.
Cost, mass, volume, power consumption, legacy, and ease of use are some basic evaluation criteria (note: depending on the project, additional criteria may be desirable, or eliminating one listed here could be appropriate). It is also very important to understand that the choice of weight factors and normalization scale strongly shapes this process. A great deal of care must be taken when setting these values, because the result can be highly sensitive to intentional or unintentional bias. For example, if one lists cost of insurance when buying a car along with initial purchase price as evaluation criteria, but gives the insurance a weighting factor much higher than the purchase price, the ranking of the different car options changes markedly. When the insurance is much less than the purchase price and varies only slightly among the various cars, the results will not yield the best value if what you truly desire is low overall lifecycle cost or the lowest cost per mile.
In some very close trade spaces, the desire may be to distinguish among various options, and a tiered weighting scale can be employed (e.g., weights of 1, 3 and 9). This intentionally inflates the small distinctions among options so that a clear “winner” can be determined. Such schemes should be used extremely carefully and only when options are truly impossible to differentiate.
Simplified Trade Study Steps.
1. Define the problem.
2. Define constraints on the solutions.
3. Find 3-5 solutions.
4. Define evaluation criteria.
5. Define weight factors.
6. Define normalization scale.
7. Fill out trade study (e.g., spreadsheet format).
8. Rank the solutions.
Trade Study Example – Purchasing a Car (from J-M Wersinger and Thor Wilson)
Based on the steps listed above:
1. I want to find what is the best option for a new car.
2. The car must be less than $50,000 and it must be local.
3. A black Civic with 37,000 miles, costing $4,000, in poor condition. A red BMW with 57,000 miles, costing $17,000, in excellent condition. A white Passat with 6,000 miles, costing $7,000, in good condition.
4. Cost, mileage, condition of the car, and color of the car.
5. Assign a weight factor of 3 for cost and condition of the car, 2 for mileage, and 1 for color of the car.
6. Normalization scales:
Condition of the car.
Color of the car.
Table 2. Trade study example.
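The steps above can be worked in a few lines of code. The weights come from step 5; the 1-5 normalized scores below are hypothetical stand-ins, since the original normalization tables are not reproduced here:

```python
# Worked sketch of the car trade study. Weights follow step 5 above; the
# 1-5 normalized scores are hypothetical (higher is better on each criterion).
weights = {"cost": 3, "mileage": 2, "condition": 3, "color": 1}

cars = {
    "Civic":  {"cost": 5, "mileage": 3, "condition": 1, "color": 3},
    "BMW":    {"cost": 1, "mileage": 2, "condition": 5, "color": 4},
    "Passat": {"cost": 4, "mileage": 5, "condition": 4, "color": 3},
}

def score(option, weights):
    """Weighted sum of normalized criterion scores (step 7)."""
    return sum(weights[c] * option[c] for c in weights)

# Step 8: rank the solutions by total score, best first.
ranking = sorted(cars, key=lambda name: score(cars[name], weights), reverse=True)
for name in ranking:
    print(name, score(cars[name], weights))
```

With these assumed scores the low-mileage Passat wins; changing the weights or the normalization, as cautioned earlier, can easily flip the ranking.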
Trade Study Example – Comparison of Microcontrollers for CubeSat (AUSSP, 2007)
Table 3. Trade study example, microcontroller versus FPGA.
This is a trade study conducted by the C&DH subsystem team to evaluate the relative strengths and weaknesses of standard 8-bit microcontrollers and antifuse (one-time programmable) FPGAs (Field Programmable Gate Arrays) in CubeSat applications. While microcontrollers have been used extensively in CubeSat applications, FPGAs are a relatively new technology that may provide a solution to radiation-induced single event upsets (SEUs), common in a sun-synchronous low Earth orbit. Evaluation criteria were chosen such that the trade would compare not a specific piece of hardware from each category, but rather the essential traits of any piece of hardware from each category.
Result: The inherent radiation tolerance of any piece of hardware implemented on a CubeSat is a feature of critical importance. While antifuse FPGAs appeared to be a solution for SEUs in small satellites, they did not score higher than the more traditional microcontrollers in the trade study. When performance, cost, time, and the scope of implementing each system are compared, FPGAs seem to take only the performance category, with microcontrollers sweeping the others. Even with radiation tolerance at 30% of the total weighted score (an example of biasing the analysis to check for robustness of the result), FPGAs surprisingly trail microcontrollers. Undoubtedly, this type of trade study would have very different results for missions with more resources, more time, and more experienced engineers. However, it appears that the correct route is to implement a standard microcontroller, coupled with logical SEU detection and correction algorithms.
This was a subsystem comparison that did not consider the systems engineering implications for other subsystems, which would have further solidified the choice of a standard programmable controller. With only one set of instructions capable of being implemented, the quality and reliability of all the controlled subsystems would be heightened, but those subsystems would have to lock in technology decisions earlier and have less ability to correct issues through software changes. From a systems engineering (or project management) point of view, a greater number of SEUs, additional mass in shielding, or more effort in component layout (taking advantage of the shielding effect of tanks, structures, etc.) are all acceptable compromises compared to an inflexible but radiation-hardened C&DH subsystem.
Interface Control Document Examples.
Interface Control Documents (ICDs) are inherently simple: they record all the places where it is easy (and past experience has proven it easy) to miscommunicate technical specifications and requirements among subsystem teams. Misaligned bolt holes or missing connector pins are costly to fix when final assembly is being made and the project is a few weeks behind schedule in meeting a fixed launch date. The document is another place to double-check voltages, fluid flow rates, thermal loads, etc. to ensure compliance with standards and higher-level requirements. The ICD also can help account for the “missing mass” never planned for, in the form of wiring, brackets and miscellaneous hardware required in a real working assembly, since rarely are all system components connected end to end.
An ICD is not only documentation but can also be hardware designed to check fit and test actual components throughout development. This is especially helpful in large projects with components and subsystems being developed around the world. Instead of physically putting parts together, an interface hardware model is constructed and tested separately against each part, without impacting either group’s schedule or development process. ICDs apply to ground equipment and wireless communications as well. Anywhere there is a need to document separate functions that will ultimately interconnect, an ICD can be developed. A wide variety of document forms are shown in the following examples. Tables or spreadsheet formats are common, but ICDs can include technical documents, pictures, hardware, graphs and specification references.
The systems engineer is responsible for keeping the ICD current and accurate. Subsystem leads will often blindly apply the last specifications they received as those to which they are designing and building. Therefore the systems engineer must ensure every team gets each and every change update and that conflicts are resolved early in the design phase. There is also the need to ensure layout room, mass, thermal contact, physical rigidity and electrical/data parameters are satisfactory for all subsystems as well as the mission objectives.
[Table content flattened in extraction. The example ICD listed each inter-module analog signal (the 3.3 V, 5.0 V and battery bus voltages; battery charging/discharging currents; solar cell and bus currents; transceiver supply voltage; and board thermistor temperatures) together with its source and readout path, e.g., directly on a CDH MCU ADC pin (PF0-PF7) or via I2C from ADC1/ADC2/ADC3 on the EPS or the ADC on the COM board.]
Table 4. Example Interface Control Document.
Table 5. MCU/TNC interface.
Table 6. Example Interface Control Document.
Mass Budget Examples.
For aerospace applications, the mass budget spreadsheet is the icon of the systems engineer’s toolbox. Because minimizing mass is so critical in aircraft, rocket and spacecraft design, the bottom-line number here is often the most monitored and distressing result in SE. Although of great importance, the mass budget is simple to set up and use. Unfortunately, it is often poorly estimated and certainly the most abused SE tool. Mass is critical; therefore even small errors can lead to big problems in aerospace. A 30% margin is generally added for preliminary mass estimates, with more than 50% added to systems or subsystems that are new or have unique technology. Even a spacecraft that is a duplicate of a previous mission will carry as much as a 10% margin, since each mission has unique environments and almost any requirement modification will translate into a mass penalty. Some things are best not burdened with a flat mass margin, such as propellants; such fluids tend to be heavy and are normally well known from the trajectory analysis and other factors. In the case of a heavy radiation shield, which is significantly massive but essentially “dumb mass”, 2% or less margin might be applied.
The systems engineer should not be the clerk or secretary who enters whatever value the subsystem leads produce and then announces the grand total to the team. There is judgment in producing an initial gross mass, assigning a maximum mass allowance for each subsystem and giving guidance on contingency at the part, component or system levels. The total should meet the other mission-level criteria such as the launch vehicle limit, the planned payload, or a cost target based on mass. Determining whether more advanced technology is required, or whether to reallocate the mass estimate, takes engineering insight. The systems engineer must serve as the “honest broker” to direct the best overall decisions for the team and to work through the inevitable conflict among subsystems.
Table 7. C&DH mass budget example.
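A mass budget with the tiered margins discussed above can be sketched as follows. The subsystem names, estimates and margin fractions are hypothetical, chosen to mirror the guidance in the text (30% preliminary, 50% for new technology, ~2% for "dumb mass", none for trajectory-sized propellant):

```python
# Mass budget sketch: current-best-estimate (CBE) masses plus item-appropriate
# margins. All names and numbers are hypothetical, for illustration only.
subsystems = [
    # (name, CBE mass [kg], margin fraction)
    ("Structure",        12.0, 0.30),   # preliminary estimate: 30%
    ("New tether tech",   8.0, 0.50),   # new/unique technology: 50%
    ("Radiation shield", 40.0, 0.02),   # well-understood "dumb mass": ~2%
    ("Propellant",       25.0, 0.00),   # sized by trajectory analysis
]

def budget(subsystems):
    """Return per-item rows plus CBE and with-margin totals."""
    rows = [(name, cbe, cbe * (1 + m)) for name, cbe, m in subsystems]
    total_cbe = sum(r[1] for r in rows)
    total_margined = sum(r[2] for r in rows)
    return rows, total_cbe, total_margined

rows, total_cbe, total_margined = budget(subsystems)
for name, cbe, margined in rows:
    print(f"{name:<16} CBE {cbe:5.1f} kg  with margin {margined:5.1f} kg")
print(f"Totals: CBE {total_cbe:.1f} kg, with margin {total_margined:.1f} kg")
```

The margined total, not the CBE, is what should be held against the launch vehicle limit or other mission-level mass criteria.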
Power Budget Examples.
Power budgets are often kept at the subsystem level for convenience, but this is really a systems engineer’s function and concern as much as the mass budget. There are trades made among subsystems, technology selections and the integration of mission operations. There are two key power budgets; the total power (everything thing on) and the operational limits (peak power allowed). Many times the power budget will restrict certain functions and limit operational modes. Transmitting data often is a power intensive task and is limited to certain periods of the mission when other subsystems are shutdown or in idle mode. Earth orbiting satellites will have period of darkness where solar panels no longer provide power and batteries must be reserved for heating crucial spacecraft components. Such trades and operations decisions are at a SE level, even when the power budget is maintained by a single subsystem lead.
In the C&DH budget example below, several power modes or states are listed. Each has its own permissible upper or lower limit, and each is accounted for separately. These states correspond to the mission modes and timeline.
Table 8. C&DH Power Budget example.
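A minimal sketch of the per-mode power check described above might look like the following; the loads, modes, and limits are invented for illustration and do not come from Table 8:

```python
# Per-mode power budget check. All wattages and limits are assumed values.

loads_w = {"c_and_dh": 0.5, "comm_tx": 4.0, "comm_rx": 0.2, "heaters": 1.5}

modes = {                      # which loads are on in each mission mode
    "idle":     ["c_and_dh", "comm_rx"],
    "transmit": ["c_and_dh", "comm_tx"],
    "eclipse":  ["c_and_dh", "comm_rx", "heaters"],
}
limits_w = {"idle": 1.0, "transmit": 5.0, "eclipse": 2.5}  # allowed peak per mode

def mode_power(mode: str) -> float:
    """Total draw of the loads that are on in the given mode."""
    return sum(loads_w[name] for name in modes[mode])

for mode in modes:
    p = mode_power(mode)
    status = "OK" if p <= limits_w[mode] else "OVER LIMIT"
    print(f"{mode:8s} {p:4.1f} W / {limits_w[mode]:.1f} W  {status}")
```

A check like this makes it obvious, for example, that the transmitter cannot run during eclipse without exceeding the mode limit, which is exactly the kind of operational restriction the text describes.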
Data and Link Budget Examples.
Like the power budget, the data and link budgets are often maintained by a subsystem lead, with responsibility for overall coordination still resting with the systems engineer. The data budget is derived from the time available for communications and the baud rate. There are usually two rates and often two separate systems: one for the uplink, or communications to the system, and another for the downlink of data being sent to the ground station. There are many factors to account for, and engineering judgment is necessary when making these estimates. The quality of the data, its resolution, error checking, packet size, and similar criteria are typically tracked parameters in the data and link budgets.
Table 9. C&DH data budget example.
Table 10. Link budget example.
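The basic data-budget arithmetic (contact time times baud rate, less protocol overhead) can be sketched as follows. Every number here is an illustrative assumption, not a value from the tables above:

```python
# Back-of-envelope data budget: downlink capacity per pass versus the data
# generated per orbit. All numbers are illustrative assumptions.

downlink_bps = 9600          # downlink baud rate (assumed)
pass_seconds = 8 * 60        # usable ground-station contact per pass (assumed)
overhead = 0.20              # framing/error-checking overhead fraction (assumed)

# usable payload bits per pass after protocol overhead
bits_per_pass = downlink_bps * pass_seconds * (1.0 - overhead)
kbytes_per_pass = bits_per_pass / 8 / 1024

data_generated_kb = 400      # science + housekeeping per orbit (assumed)
print(f"downlink capacity: {kbytes_per_pass:.0f} kB/pass")
print("budget closes" if kbytes_per_pass >= data_generated_kb else "budget short")
```

If the budget does not close, the trades are the same ones the text mentions: reduce resolution, tighten packaging, add passes, or raise the baud rate (which in turn loads the power and link budgets).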
Failure Mode Analysis.
Failure mode analysis is a tool for risk management; it is the more rigorous engineering or technical aspect of risk management. A failure mode is the way in which something fails. Every failure has one or more consequences, which are called the failure effects. A failure cause is what induces the failure. After all possible failures are identified, the effects of each failure are estimated. Next, a mitigation plan is prepared for each potential failure cause.
Failure Modes Analysis on the CubeSat.
Because the CubeSat project does not have the funds or the available space to mitigate every potential failure, the system must be redundant, fault tolerant, and able to correct detected errors. Failure modes analysis is done to determine what the potential failures in a design are and how to mitigate them. This analysis determines the relation between the failure of a single component and its effects on the system as a whole. We accept that no system is perfect, and so some risks are acceptable. Therefore, mitigation attempts will be focused on those failures that could cause mission failure. In most cases, a secondary component can be used to mitigate any mission failure. The table immediately following shows the four different ways a component failure can affect the whole system.
Table 11. Failure Mode Analysis, Codes of Severity of a Potential Failure.
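A failure-mode worksheet of the kind described above can be captured in a simple record structure. The entries and the four-level severity scale below are illustrative placeholders, not the actual Table 11 codes:

```python
# Minimal failure-mode worksheet with the mode/cause/effect/mitigation fields
# described in the text. The records and the 1-4 severity scale are invented.

from dataclasses import dataclass

@dataclass
class FailureMode:
    component: str
    mode: str          # how the component fails
    cause: str         # what induces the failure
    effect: str        # consequence at the system level
    severity: int      # 1 = negligible ... 4 = mission failure (assumed scale)
    mitigation: str

fmea = [
    FailureMode("Li-ion battery", "cell open-circuit", "vibration at launch",
                "loss of stored power", 4, "redundant cell string"),
    FailureMode("transceiver", "TX amplifier degradation", "thermal cycling",
                "reduced downlink rate", 2, "lower-rate fallback mode"),
]

# Mitigation effort is focused on the highest-severity (mission-ending) modes.
critical = [f.component for f in fmea if f.severity == 4]
print(critical)
```

Filtering on severity this way mirrors the text's point that, with limited funds and space, mitigation is concentrated on the failures that could end the mission.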
Identified Single Point Failures.
A single point failure occurs if the mission fails as a result of a single component on the satellite failing. Simply put, it is hardware without which the satellite cannot operate. It is extremely important to detect and eliminate all single point failures that could arise on AS-I. In an effort to eliminate single point failures, a redundancy philosophy has been adopted: all satellite hardware identified as single point failure components must have redundant (secondary) hardware. Generally, redundant hardware simply means having two of the same component, operating independently. A common variation on this theme of "if it fails, use the other one" is the hot spare, where the second component is active, functioning, and continually ready to operate at any moment. Sometimes the units are regularly swapped as the primary unit for other reasons, such as to extend battery lifetime or prevent joint seizing in mechanisms.
Other redundancy philosophies include having a totally different piece of hardware as the backup. An example is a low gain antenna serving the housekeeping needs and a high gain antenna for the mission science: if one fails, the other can perform the same duty, but with diminished total data throughput. As with most aerospace-related redundancy schemes, this provides a significant mass savings over having two of both types of communications systems onboard. Another mass-saving method is to identify the actual weak point in the subsystem and provide the redundancy at the component or part level. A good example is in rocket fluid valves, where the valve housing is the heaviest and least likely part to fail. In the past, welding in a second complete valve inline or in parallel was the solution. However, since the failure is almost always in the valve seat (e.g., not closing tight), redundancy can be achieved by duplicating the seat seals and using dual electromagnet actuating coils. This not only saves the mass of a valve housing, but also the cost and risk of two more welds in the system. Redundancy can also take the form of de-rating components to ensure they last the mission lifetime, or of adding margin to a subsystem so that if its redundant part fails, it can "ramp up" to higher performance and diminish or eliminate the loss. A spacecraft power conversion box or transformer might operate as two low-power units supplying the primary instrument high voltage, thus gaining long life with lower risk, but with each unit having the capacity to meet the entire voltage requirement (or the minimum for mission success) should the other component fail. Details on the AS-I redundancy choices for the single point failures are given in each subsystem's documentation. The following components have been identified as single point failure components in the CubeSat:
Power Storage (Li-ion batteries)
Example Failure Mode Analysis – COMM System, C&DH System and Structures.
Table 12. Failure mode analysis for Comm system.
Table 13. Failure mode analysis for C&DH system.
Table 14. Failure mode analysis for Structures system.
Gantt Chart.
A Gantt chart is a bar chart that can be used to allot time to tasks, schedule reviews, and date milestones. Tasks are the project activities; each has a start date and an end date. Generally, we write a task as a phrase starting with a verb (e.g., "Create structure"). The chart often needs to be updated, since end dates are usually estimates and tend to slide (almost always later rather than earlier in real practice). Milestones are checkpoints, due dates, dates for interim goals, or dates of reviews. Since the chart period is usually set in days, weeks, or months, meetings, reviews, and milestones appear as single dots or bullets rather than as bars showing duration. See Chapter 2 for other examples.
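The mechanics above can be illustrated with a toy text-mode Gantt chart, in which tasks appear as bars and milestones as single marks on a weekly grid. The task names, weeks, and milestones are made up:

```python
# Toy text-mode Gantt chart: tasks render as bars ("#") on a weekly grid,
# milestones as single marks ("*"). Task names and dates are invented.

tasks = [  # (name, start_week, end_week), inclusive
    ("Define requirements", 1, 3),
    ("Design structure",    2, 6),
    ("Build prototype",     5, 9),
]
milestones = [("PDR", 4), ("CDR", 8)]  # single-point events
weeks = 10

rows = []
for name, start, end in tasks:
    bar = "".join("#" if start <= w <= end else "." for w in range(1, weeks + 1))
    rows.append(f"{name:20s} {bar}")
for name, week in milestones:
    mark = "".join("*" if w == week else "." for w in range(1, weeks + 1))
    rows.append(f"{name:20s} {mark}")
print("\n".join(rows))
```

Even in this toy form, the overlap between "Design structure" and "Build prototype" is the sort of integration conflict the systems engineer watches for.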
The Gantt chart is typically one of those SE "tools" that can make a slave of its master in real practice. If you knew exactly what tasks were needed, precisely how long each task really took, and the proper order in which they would be done, then the Gantt chart could properly be used as a true gauge of engineering progress. If the real progress was ahead of the chart, the team could be assumed exceptional and be given time off; if behind, they would need to stay late and get back on track. But all of that assumes the chart reflects the true process. Only in rare circumstances is there sufficient information to lay out even a slightly accurate schedule at the beginning of a technical project. Besides not knowing the design and what might be needed to integrate all the subsystems, things outside the project's control are guaranteed to upset the planning. Typical for aerospace projects are funding availability (money must be there to hire engineers or pay the contractor), the procurement process (how long it takes to buy something), facility access (testing is often rescheduled because other projects are using a facility), and numerous technical issues. This "measuring stick" of a project's progress is not a precise yardstick that can be used to beat subsystem leads into doing their work! It is more accurately described as a loosely laid out bungee cord that different people pick up at various points, each affecting the length and alignment of the rather inaccurately spaced measurement marks. Use the chart as a tool to help monitor and roughly gauge where a project stands. It is better than having no idea of the task timing, and it does make the systems engineer aware of conflicts and the product integration flow. Nonetheless, you could have the most exceptional engineering team far behind the original schedule, with added tasks and missed milestones, and the sorriest group of misfits right on the Gantt chart schedule. You probably would still prefer to ride in the rocket the first group produced late than to launch on time with the second.
Figure 5. Gantt Chart showing schedule of tasks and their progress.
Outline of a Detailed Systems Engineering Management Plan (SEMP) for a Student Project (NASA, 2002)
The SEMP is a planning document that should be baselined by the systems engineer at the end of Phase A and formally updated as needed thereafter. It primarily schedules activities and reviews, and assigns SE functions to individuals. The level of detail necessary here depends on the size of the team, the scope of the project, etc., with the primary sections listed as follows:
1. Introduction.
(Include Mission Overview and Project schedule with life cycle and reviews.)
2. System Engineering Activities.
(Describe the overall lifecycle including the major systems engineering activities for each phase, irrespective of who does them. Describe critical decisions and activities such as Reviews.)
3. Systems Engineering Communications.
(Describe methods utilized for communicating systems engineering activities, progress, status, and results.)
4. Systems Engineering Functions.
4.1 Mission Objectives.
(The Systems Engineer should be responsible for creating a team responsible for Level 1 Requirements and Mission Objectives.)
4.2 Operations Concept Development.
(The Systems Engineer defines who develops the operations concept, what format is planned and when it is due. Define who develops the ground based verification concept, what format is planned and when it is due.)
4.3 Mission Architecture and Design Development.
(Define who develops the Architecture and Design, what format is planned and when it is due. Define who develops and maintains the Product Breakdown Structure.)
4.4 Requirements Identification and Analysis.
(Define who develops the requirements hierarchy, define who is responsible for each part of the hierarchy, define who identifies and is responsible for the crosscutting requirements. Define when requirements identification is due and when formal configuration control is expected to start.)
4.5 Validation and Verification.
(Define who is responsible for the validation and verification activities and how this is accomplished.)
4.6 Interfaces and ICDs.
(Define which ICDs are planned, what interfaces are to be included, who is responsible for developing the ICDs and who has approval and configuration management authority.)
4.7 Mission Environments.
(Define the applicable mission environments, who is responsible for determining the mission specific environmental levels or limits, and how each environmental requirement is to be documented.)
4.8 Resource Budgets and Error Allocation.
(List the resource budgets that Systems Engineering will track, and when they will be placed under formal configuration management.)
4.9 Risk Management.
(Define who is responsible for defining acceptable risk and where this is documented. Define the role of systems engineering in risk management and how the analyses are to be accomplished.)
4.10 System Engineering Reviews.
(Define which system engineering reviews are planned and who is responsible for organizing them.)
5. Configuration Management.
(Define what systems engineering documentation is required and when it is to be placed under formal configuration management. Define the method to archive and distribute System Engineering information generated during the course of the lifecycle.)
6. System Engineering Management.
(Define the Systems Engineering Organization Chart and Job Responsibilities. Define trade studies, who does them and when they are due.)
AUSSP. (2007). AubieSat-1: Auburn's First Student-Built Satellite.
NASA. (1995). NASA Systems Engineering Handbook, SP-610S. PPMI.

4 ITS Technical Processes.
This document is intended to help you understand how systems engineering can be used throughout the ITS project life cycle. Chapters 4 and 5 present two different types of processes that support systems engineering:
- Technical processes, such as system requirements, high-level design, integration, and verification, are described here in Chapter 4. These processes, depicted in the "V" systems engineering model, are performed to develop an ITS project that meets the user's needs. This chapter leads you step by step through each of the technical processes in the "V". Each process is summarized, key activities are identified, and outputs that should be generated are defined.
- Project management processes, such as project planning, risk management, and configuration management, are described in Chapter 5. These cross-cutting activities are just as important to the success of the project, but they do not appear in the "V" diagram because they apply to many different steps in the "V". These processes are used to plan, monitor, and control the ITS project so that it is completed on time and on budget.
Relationship to Traditional Transportation Processes.
ITS projects are identified and funded through transportation planning and programming/budgeting processes in each state, planning region (e.g., metropolitan planning area), and agency. The "V" diagram and the systems engineering process begin once a need for an ITS project has been identified. The early steps in the "V" define the project scope and determine the feasibility and acceptability as well as the costs and benefits of the project. These early steps actually support planning and programming/budgeting since they are intended to identify high-level risks, benefits, and costs and to determine if the ITS project is a good investment. The latter steps support project implementation, then transition into operations and maintenance, changes and upgrades, and ultimate retirement or replacement of the system. (The systems engineering "V" is placed in context with the traditional transportation project life cycle in Section 6.1.)
Each step of the process that is described in this chapter results in one or more technical outputs. This documentation is used in subsequent steps in the process and provides a critical documentation trail for the project. The documentation that is discussed in this chapter is identified in Table 1, which provides a bird's-eye view of where it fits in the "V". Several resources provide good descriptions and templates for this documentation. Note that not every ITS project will require every document listed in the table. (More information on tailoring is provided later in this chapter and in Section 6.2.3.)
About the Examples.
This chapter is illustrated with real examples that show how different agencies have used the systems engineering process for their ITS projects. These real examples aren't perfect and shouldn't be taken as the only approach, or even the best approach, for accomplishing a particular step. As time goes by and we gain experience using systems engineering on ITS projects, many more examples will become available.
4.1 Using the Regional ITS Architecture.
In this step: The portion of the regional ITS architecture that is related to the project is identified. Other artifacts of the planning and programming processes that are relevant to the project are collected and used as a starting point for project development. This is the first step in defining your ITS project.
- Define the project scope while considering the regional vision and opportunities for integration
- Improve consistency between ITS projects and identify more efficient incremental implementation strategies
- Improve continuity between planning and project development.
Sources of Information.
- Relevant regional ITS architecture(s)
- Regional/national resources supporting architecture use
- Other planning/programming products relevant to the project.
- Identify regional ITS architecture(s) that are relevant to the project
- Identify the portion of the regional ITS architecture that applies
- Verify project consistency with the regional ITS architecture and identify any necessary changes to the regional ITS architecture.
- List of project stakeholders and roles and responsibilities
- List of inventory elements included in or affected by the project
- List of requirements the proposed system(s) must meet
- List of interfaces and the information to be exchanged or shared by the system(s)
- Regional ITS architecture feedback as necessary.
Proceed only if you have:
- Demonstrated consistency with the regional ITS architecture and identified needed changes to the regional ITS architecture, if applicable
- Extracted the relevant portion of the regional ITS architecture that can be used in subsequent steps
- Reached consensus on the project/system scope.
4.1.1 Overview.
The regional ITS architecture provides a good starting point for systems engineering analyses that are performed during ITS project development. It provides region-level information that can be used and expanded in project development.
When an ITS project is initiated, there is a natural tendency to focus on the programmatic and technical details and to lose sight of the broader regional context. Using the regional ITS architecture as a basis for project implementation provides this regional context as shown in Figure 8. It provides each project sponsor with the opportunity to view their project in the context of surrounding systems. It also prompts the sponsor to think about how the project fits within the overall transportation vision for the region. Finally, it identifies the integration opportunities that should be considered and provides a head start for the systems engineering analysis.
Figure 8: Regional ITS Architecture Framework for Integration.
The regional ITS architecture is a tool that is used in transportation planning, programming, and project implementation for ITS. It is a framework for institutional agreement and technical integration for ITS projects and is the place to start when defining the basic scope of a project.
The regional ITS architecture is the first step in the "V" because the best opportunity for its use is at the beginning of the development process. The architecture is most valuable as a scoping tool that allows a project to be broadly defined and shown in a regional context. The regional ITS architecture step and the concept exploration step that is described in the next section may iterate since different concepts may have different architecture mappings. The initial architecture mapping may continue to be refined and used as the Concept of Operations and system requirements are developed.
The Regional ITS Architecture Guidance Document provides detailed guidance for regional ITS architecture development, use, and maintenance. (Version 2 of this document provides detailed guidance for using a regional ITS architecture to support project implementation.)
4.1.2 Key Activities.
Initial use of the regional ITS architecture requires a few basic activities: locating the right architecture, identifying the portion of the architecture that applies to your project, and notifying the architecture maintainer of any required regional architecture changes. None of these tasks is particularly time consuming – the basic extraction of information can be done in an afternoon, even for a fairly complex project, if you are knowledgeable about the regional ITS architecture. Of course, it can be time consuming to climb the learning curve, and coordinating and building consensus on the scope of the project will require time and effort. Each of the key activities is described in the following paragraphs.
Identify regional ITS architecture(s) that are relevant to the project – First, find the regional ITS architecture that covers the geographic area where your project will be implemented. In some cases, more than one regional ITS architecture may apply. For example, a major metropolitan area may be included in a statewide architecture, a regional architecture for the metropolitan area, and subregional architectures that are developed for a particular agency or jurisdiction. Coordinate with the ITS specialist at the FHWA Division/FTA Regional Office if necessary to sort this out.
In the event that no regional ITS architecture exists at the time that an ITS project is initiated, coordinate with the FHWA Division/FTA Regional Office on starting a regional ITS architecture effort. In the interim, a project-level architecture should be developed based on the National ITS Architecture to support the ITS project.
The systems engineering analysis requirements identified in FHWA Rule 940.11/FTA Policy Section VI require identification of the portion of the regional ITS architecture that is implemented by each ITS project that uses federal funds. If a regional ITS architecture does not exist, then the portion of the National ITS Architecture that will be implemented by the project must be identified.
You should build consensus around the fundamental project scope decisions that are made as the relevant portions of the regional ITS architecture are identified. One good approach is to create a context diagram that shows the ITS system to be implemented in the middle of the diagram surrounded by all other potentially interfacing systems in the region. For example, Figure 9 is a context diagram for the MaineDOT Communications Center. A context diagram can be used to discuss integration opportunities that should be considered in this project and in future projects. A discussion like this puts the ITS project in context and raises awareness of future ITS integration opportunities. It also may highlight regional ITS architecture issues that should be addressed.
In almost every case, the regional ITS architecture will identify potential integration opportunities that will not be included in the current project. Specific integration options may be deferred for many reasons – agencies on both sides of the interface may not be ready, there may not be sufficient funding or time available to implement everything, supporting infrastructure may not yet be completed, a necessary standard may not be available, implementing too much at once may incur too much complexity/risk, etc.
Even if they are deferred, it is important to account for future integration options in the current project design. The ultimate goal is to make ITS deployment as economical as possible by considering how this project will support future projects over time. To support this objective, future integration options that may impact the project design should be identified and considered in the project development. For example, additional stakeholders may be involved in the current project to ensure that future interface requirements are identified and factored into the current project design.
Each region should define a mechanism that allows the project team to provide comments on the architecture with minimal time investment. Project teams that use the architecture will be among the most significant sources for regional ITS architecture maintenance changes, and the region should strive to facilitate this feedback. If your region does not have such a mechanism, consult the Regional ITS Architecture Guidance Document for more information on facilitating architecture use and maintenance in your region.
4.1.3 Outputs.
The first output of this step is the subset of the regional ITS architecture for the ITS project. While the Rule/Policy requires a subset of the regional ITS architecture to be identified, it does not define the components that should be included. You should consult local guidelines or requirements to help make this determination. In most cases, the following components will precisely define the scope of the project: (1) stakeholders, (2) inventory elements, (3) functional requirements, and (4) information flows.
These four components define the system(s) that will be created or impacted by the project, the affected stakeholders, the functionality that will be implemented, and the interfaces that will be added or updated. Other components may be identified, including market packages, roles and responsibilities, relevant ITS standards, and agreements. For very large ITS projects, this might be several pages of information. For a small ITS project, this might fit on a single page. The information that is extracted will actually be used in the concept exploration, Concept of Operations, requirements, and design steps that follow.
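The four architecture-subset components can be captured in a simple data structure for use in the concept exploration and requirements steps that follow. The element names below are illustrative placeholders in the style of the MaineDOT example, not actual architecture entries:

```python
# Sketch of the four required architecture-subset components as a small data
# structure. Stakeholder and element names are illustrative placeholders.

project_subset = {
    "stakeholders": ["MaineDOT", "City of Portland"],
    "inventory_elements": ["DMS field equipment", "Communications Center"],
    "functional_requirements": [
        "The center shall remotely control dynamic message signs.",
    ],
    "information_flows": [
        # (source element, destination element, flow name)
        ("Communications Center", "DMS field equipment", "roadway information system data"),
    ],
}

# Quick completeness check before moving on to concept exploration.
required = {"stakeholders", "inventory_elements",
            "functional_requirements", "information_flows"}
assert required <= project_subset.keys()
print("architecture subset components complete")
```

Keeping the subset in a structured form like this also makes it easy to highlight additions or changes that need to be fed back to the regional ITS architecture maintainer.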
The Turbo Architecture software tool can be used to quickly and accurately define an ITS project architecture if the regional ITS architecture was developed with Turbo. Turbo Architecture can be used to generate diagrams and reports that fully document the portion of the regional ITS architecture that will be implemented by the project. Turbo Architecture can also be used to develop a project ITS architecture based on the National ITS Architecture if a regional ITS architecture does not exist. The Turbo Architecture software can be obtained from McTrans by visiting their website at www-mctrans.ce.ufl.edu/featured/turbo/.
If you don't find what you need in the regional ITS architecture, then you should add the missing or changed items to your architecture subset and highlight them so it is clear what you changed. For example, if there is a system in your project that is not represented in the regional ITS architecture, add it to your architecture subset and highlight it. The highlighted changes serve two purposes: they allow you to move forward with an augmented architecture subset that you can use in the next steps of the process, and they provide the basis for your feedback for regional ITS architecture maintenance.
The second output of this step – feedback to the regional ITS architecture maintenance team – is just as important as the first output. Submit any recommended changes using the mechanism defined for your region in the regional ITS architecture maintenance plan.
4.1.4 Examples.
The subset of the regional ITS architecture that is included in the project can be shown in a series of simple tables and/or a diagram from Turbo Architecture, as shown in Figure 9. This figure identifies the inventory elements and interfaces that will be implemented by a MaineDOT Dynamic Message Sign (DMS) project in which several signs will be installed in Portland, Maine, along with a central control system with interfaces to a number of other centers. Functional requirements that are relevant to the project were also extracted, as shown in Table 2.
Figure 9: Example: MaineDOT DMS Project Architecture Subset.
4.2 Feasibility Study/Concept Exploration.
In this step: A business case is made for the project. Technical, economic, and political feasibility is assessed; benefits and costs are estimated; and key risks are identified. Alternative concepts for meeting the project's purpose and need are explored, and the superior concept is selected and justified using trade study techniques.
- Identify a superior, cost-effective concept, and document alternative concepts with the rationale for selection
- Verify project feasibility and identify preliminary risks
- Garner management buy-in and necessary approvals for the project.
Sources of Information.
- Project goals and objectives
- Project purpose and need
- Project scope/subset of the regional ITS architecture.
- Define evaluation criteria
- Perform initial risk analysis
- Identify alternative concepts
- Evaluate alternatives
- Document results.
Feasibility study that identifies alternative concepts and makes the business case for the project and the selected concept.
Proceed only if you have:
- Received approval on the feasibility study from project management, executive management, and controlling authorities, as required
- Reached consensus on the selected alternative.
4.2.1 Overview.
In this step, the proposed ITS project is assessed to determine whether it is technically, economically, and operationally viable. Major concept alternatives are considered, and the most viable option is selected and justified. While the concept exploration should be at a fairly high level at this early stage, enough technical detail must be included to show that the proposed concept is workable and realistic. The feasibility study provides a basis for understanding and agreement among project decision makers – project management, executive management, and any external agencies that must support the project, such as a regional planning commission.
The Rule/Policy requires the systems engineering analysis to include an analysis of alternative system configurations and technology options. The focus of this Rule/Policy requirement is on design decisions that are made later in the process, but a fundamental analysis of basic systems configurations is performed in this step.
It is easy to confuse the concept exploration that is performed in this step with the Concept of Operations that is developed in the next step. Concept exploration is a broad assessment of fundamentally different alternatives – for example, a new electronic toll facility versus additional conventional lanes. The alternatives would have dramatically different concepts of operations, so it is important to select a proposed concept before developing a Concept of Operations. Different alternatives may also have different regional ITS architecture mappings, so this step may iterate with the previous regional ITS architecture step.
The process is driven by the project vision, goals, and objectives, and by the needs for the project that were identified through the transportation planning process. It starts by identifying a broad range of potential concepts that satisfy the project need(s). The concepts are compared relative to measures that assess the relative benefits, costs, and risks of each alternative. Project stakeholders must be involved to establish the evaluation criteria, verify that all viable alternative concepts are considered, and make sure there is consensus on the selected alternative. The recommendations provide a documented rationale for the selected project approach and an assessment of its feasibility. The process is identical to a feasibility study done for large roadway and transit projects.
The alternatives analysis that is performed during a feasibility analysis uses a basic trade study technique, shown in Figure 10, that will be repeated many times during the project life cycle. At this early concept exploration step, the alternatives are fundamental choices, such as to maintain the existing facility ("do nothing"), build a new road, or add ITS technology to the existing facility. During design, the alternatives are design decisions, such as whether signs should be located at location A, B, or C. During construction, alternatives may have to do with optimizing closures while the work is performed. At each step, a set of alternatives is identified and analyzed from technical, economic, and operational perspectives.
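The weighted-scoring form of this trade study can be sketched as follows. The alternatives echo the examples in the text, but the criteria weights and scores are invented purely to illustrate the technique:

```python
# Weighted-criteria trade study sketch for concept exploration.
# Weights and 1-5 scores are invented for illustration.

criteria = {"benefit": 0.4, "cost": 0.3, "risk": 0.3}   # weights sum to 1.0

# score each alternative 1 (worst) .. 5 (best) against each criterion
alternatives = {
    "do nothing":           {"benefit": 1, "cost": 5, "risk": 5},
    "build new lanes":      {"benefit": 4, "cost": 1, "risk": 2},
    "add ITS to facility":  {"benefit": 5, "cost": 3, "risk": 3},
}

def weighted_score(scores: dict) -> float:
    """Sum of criterion weight times the alternative's score on it."""
    return sum(criteria[c] * scores[c] for c in criteria)

ranked = sorted(alternatives, key=lambda a: weighted_score(alternatives[a]),
                reverse=True)
for alt in ranked:
    print(f"{alt:22s} {weighted_score(alternatives[alt]):.2f}")
```

As the text emphasizes, the value of the technique lies less in the arithmetic than in agreeing on the criteria and weights with stakeholders before scoring, so the comparison is not biased toward a favorite alternative.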
Figure 10: Concept Exploration Uses Basic Trade Study Techniques.
A feasibility study should be conducted only when a broad analysis is needed before the commitment of development resources. Some states require a feasibility study for certain ITS projects. A feasibility study is typically not required for smaller, incremental ITS projects where there are not fundamentally different approaches for implementation and where feasibility is not in question – for example, a project that adds DMS to an existing system. In other cases, a broad exploration of alternatives is not warranted but a cost-benefit study is needed to make the business case for the project.
4.2.2 Key Activities.
Here at the very beginning of project development, the unknowns will certainly outnumber the knowns. Without a Concept of Operations or requirements, many assumptions will have to be made. It is important to educate the group performing this assessment on the concept exploration process and to set a schedule – otherwise, this stage could be an open-ended process since there's always something new over the horizon. The process activities are:
Define evaluation criteria – Work with stakeholders to clearly define the problem or opportunity that is to be addressed by this project, elaborating on the identified purpose and the need(s), goals, and objectives as necessary. These elements were originally defined through the transportation planning and programming process that identified the project. The inputs may be augmented by the portion of the regional ITS architecture that was identified in the previous step.
Based on the statement of the problem, establish cost constraints and any other constraints that will be used to limit the acceptable alternatives. Determine how success will be measured – the degree to which the project will solve the stated problem or realize the identified opportunity. These measures should be included in the criteria that will be used to evaluate the alternative concepts. Also, do a preliminary risk analysis to identify issues and obstacles that may affect the project, and develop evaluation criteria that will measure the sensitivity of each candidate solution to each of the risks.
It is a good idea to define evaluation criteria before alternative concepts are enumerated. By developing the criteria first, you reduce the risk of intuitively settling on an alternative and then subconsciously biasing the criteria toward the preferred alternative. It is important to develop the criteria so that they are not preferential to one of the concepts.
If you find that you are identifying specific products or vendors as the alternative solutions, you are being too specific. A trade comparison of products or vendors occurs much later in the process based on defined requirements during design. The alternatives here should be high-level concepts – for example, instrumentation with traffic detectors versus use of traffic probes to support traffic data collection for a corridor. Alternatives may also reflect life-cycle options, such as leased versus owned equipment, contracted versus in-house staffing, etc. You may have to establish a basic architecture and a minimal strawman design to support the analysis, but do no more than is necessary to support the evaluation.
A common pitfall in developing a concept exploration or any trade study comparison is the premature selection of an alternative early in the study process. Be sure to keep an open mind and spend enough time on all viable options. If only one of the alternatives is defined in detail in a concept exploration, it creates the appearance that the other alternatives were not earnestly considered or explored.
A number of tools support cost-benefit analysis for ITS projects:
The ITS Costs database contains estimates that can be used for policy analysis and cost-benefit analysis. It contains unit cost estimates for more than 200 ITS technologies as well as system costs for selected ITS deployments. (The unit cost database is available online and as an Excel spreadsheet at itscosts.its.dot.gov.)
The ITS Benefits database contains information regarding the impacts of ITS projects on the operation of the surface transportation system. The ITS Benefits website provides an online and Excel spreadsheet version of this database as well as several other documents pertaining to ITS benefits. (See itsbenefits.its.dot.gov.)
The ITS Deployment Analysis System (IDAS) is software developed by the Federal Highway Administration that can be used to estimate the benefits and costs of ITS investments, which are either alternatives to or enhancements of traditional highway and transit infrastructure. IDAS can currently predict relative costs and benefits for more than 60 types of ITS investments. (See idas.camsys.com.)
SCRITS (SCReening for ITS) is a spreadsheet analysis tool for estimating the user benefits of ITS. It is intended as a sketch- or screening-level analysis tool for allowing practitioners to obtain an initial indication of the possible benefits of various ITS applications. (For more information, see https://fhwa.dot.gov/steam/scrits.htm.)
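Under the hood, screening tools like these compare discounted life-cycle benefits against costs. The sketch below shows the basic arithmetic of a benefit-cost ratio; the discount rate, costs, and benefit stream are made-up illustrative values, not outputs of any of the tools above.

```python
# Minimal sketch of a discounted benefit-cost comparison (illustrative values).
# Present value of a yearly stream: PV = sum(x_t / (1 + r)**t)

def present_value(stream, rate):
    """Discount a list of yearly amounts (years 1..n) to present value."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream, start=1))

rate = 0.04                       # assumed discount rate
capital_cost = 500_000            # year-0 cost, not discounted
annual_om = [50_000] * 10         # O&M cost over a 10-year life cycle
annual_benefits = [120_000] * 10  # delay/safety savings, illustrative

costs = capital_cost + present_value(annual_om, rate)
benefits = present_value(annual_benefits, rate)
bc_ratio = benefits / costs
print(f"B/C ratio: {bc_ratio:.2f}")  # > 1.0 suggests a positive business case
```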
While it is best to do a complete analysis of every alternative, sometimes the sheer number of alternatives makes this thorough approach impractical. One common practice is to apply the evaluation criteria in stages, weeding out the alternatives that don't meet the fundamental criteria so that the more detailed, time-consuming analysis is performed on only a few of the most viable alternatives. The evaluation should be validated by reviewing the analysis with stakeholders who may have reasonable objections to certain assumptions and alternatives.
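The staged weeding-out described above amounts to a cheap pass/fail screen followed by detailed analysis of the survivors. A minimal sketch, with hypothetical alternatives and a hypothetical budget constraint:

```python
# Sketch of staged evaluation: a cheap pass/fail screen first, then
# detailed analysis only for the survivors (all criteria are hypothetical).
alternatives = [
    {"name": "Do nothing",      "capital_cost": 0,          "meets_need": False},
    {"name": "New toll road",   "capital_cost": 80_000_000, "meets_need": True},
    {"name": "ITS on facility", "capital_cost": 5_000_000,  "meets_need": True},
]

BUDGET = 50_000_000  # fundamental cost constraint from the problem statement

def passes_screen(alt):
    """Stage 1: drop alternatives that violate fundamental criteria."""
    return alt["meets_need"] and alt["capital_cost"] <= BUDGET

shortlist = [a for a in alternatives if passes_screen(a)]
# Stage 2: only the shortlist gets the detailed, time-consuming analysis.
print([a["name"] for a in shortlist])
```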
Remember your audience when writing a feasibility study; this study makes a business case primarily for a management audience. Any feature of the study that prevents the reader from assimilating the costs and benefits and the associated risks of each alternative solution as briefly, completely, and painlessly as possible reduces the effectiveness of the study for the audience.
Several review cycles may be required for the feasibility study. First, the document should be circulated among the project team to make sure that there is buy-in. Then an updated draft should be distributed to internal management and other organizations for approval.
4.2.3 Outputs.
The feasibility study establishes the business case for investment in a project by defining the reasons for undertaking the project and analyzing its costs and benefits. Different organizations and different projects will have different requirements, but a feasibility study should contain, at a minimum, the following:
A description of the problem or opportunity that the project is intended to address.
The project objectives that must be achieved for an alternative to be an effective response to the problem or opportunity, and the evaluation criteria that were used.
Economic and risk analyses of each of the alternatives that meet the established objectives, and the reasons for rejecting the alternatives that were not selected.
A summary description of the selected alternative, including the major system features and resources that will be used.
An economic analysis of the funding sources, the life-cycle costs and benefits of the project, and the life-cycle costs and benefits of the current method of operation.
4.2.4 Examples.
Identification of Alternatives – Transportation Planning Studies.
Feasibility studies that examine alternative concepts are frequently done for large transportation projects as part of corridor studies, major investment studies, and environmental analysis reports. The ITS option(s) in these studies often compete with traditional capital improvement options; hybrid options, which include a mix of technology and traditional capital improvements, are also considered.
For example, a congested corridor in Collin County, Texas, was the subject of a feasibility study report (FSR) 9 that was prepared by representatives from the North Central Texas Council of Governments and affected agencies. This FSR examined the following alternatives: (1) do nothing, (2) build a new freeway, (3) build a toll road with electronic collection (two alternatives), and (4) build managed lanes. One summary table that compared the traffic volumes supported by the different alternatives is shown in Table 3. Supported traffic volumes, estimated capital costs, and potential revenue generation were used to compare the alternatives. The analysis favored the electronic toll alternatives.
Broad alternatives analyses like these are included in many planning studies.
Minnesota DOT developed a guidance document 10 for cost-benefit analysis in 2005 that includes several illustrative examples. Generally, higher-level graphics that visually compare the costs and benefits of the alternatives, like the one shown in Figure 11, are used in the body of the cost-benefit analysis. More detailed computation that supports high-level graphics, like the table reproduced in Table 4, is included in appendices.
Figure 11: Example of High-Level Economic Comparison of Alternatives.
4.3 Concept of Operations.
In this step: The project stakeholders reach a shared understanding of the system to be developed and how it will be operated and maintained. The Concept of Operations (ConOps) is documented to provide a foundation for more detailed analyses that will follow. It will be the basis for the system requirements that are developed in the next step.
High-level identification of user needs and system capabilities in terms that all project stakeholders can understand
Stakeholder agreement on interrelationships and roles and responsibilities for the system
Shared understanding by system owners, operators, maintainers, and developers on the who, what, why, where, and how of the system
Agreement on key performance measures and a basic plan for how the system will be validated at the end of project development.
Sources of Information.
Stakeholder lists, roles and responsibilities, and other components from the regional ITS architecture
Recommended concept and feasibility study from the previous step
Broad stakeholder input and review.
Identify the stakeholders associated with the system/project
Define the core group responsible for creating the Concept of Operations
Develop an initial Concept of Operations, review with broader group of stakeholders, and iterate
Define stakeholder needs
Create a System Validation Plan.
Concept of Operations describing the who, what, why, where, and how of the project/system, including stakeholder needs and constraints
System Validation Plan defining the approach that will be used to validate the project delivery.
Proceed only if you have:
Received approval on the Concept of Operations from each stakeholder organization
Received approval on the System Validation Plan from each stakeholder organization.
4.3.1 Overview.
The Concept of Operations (ConOps) is a foundation document that frames the overall system and sets the technical course for the project. Its purpose is to clearly convey a high-level view of the system to be developed that each stakeholder can understand. A good ConOps answers who, what, where, when, why, and how questions about the project from the viewpoint of each stakeholder, as shown in Figure 12.
Who – Who are the stakeholders involved with the system?
What – What are the elements and the high-level capabilities of the system?
Where – What is the geographic and physical extent of the system?
When – What is the sequence of activities that will be performed?
Why – What is the problem or opportunity addressed by the system?
How – How will the system be developed, operated, and maintained?
Figure 12: Concept of Operations (Adapted from ANSI/AIAA-G-043-1992)
In ITS, we draw a distinction between an Operational Concept , which is the high-level description of roles and responsibilities that is included in the regional ITS architecture, and a Concept of Operations, which is the more detailed, multifaceted document described in this section.
Don't assume that a new ConOps is required for every ITS project. A single system-level ConOps can support many ITS projects that incrementally implement and extend a system. For example, a ConOps may be developed for a large transportation management system. This system may be implemented and expanded with numerous ITS projects over several years. Once the ConOps is developed, it may be reviewed and used with relatively minor updates for each of the projects that incrementally implement the transportation management system.
4.3.2 Key Activities.
Although there is no single recipe for developing a ConOps, successful efforts will include a few key activities:
Identify the stakeholders associated with the system/project – Systems engineering in general, and this effort in particular, require broad participation from the project's stakeholders. One of the first steps in developing a ConOps is to make sure that all the stakeholders involved in or impacted by the project – owners, operators, maintainers, users, and so forth – are identified and involved. You can start with the stakeholder list from the regional ITS architecture and then expand it to identify the more specific organizations – divisions and departments – that should be involved. One of the most effective ways to involve the stakeholders is to create an integrated product team (IPT) that brings together the necessary expertise and provides a forum for all project stakeholders.
Define the core group responsible for creating the ConOps – Although broad involvement is critical, you can't have 20 people on your writing team. Select a few individuals who are responsible for capturing and documenting the vision of the broader group. Depending on the size of the project and staff capabilities, this team might include a consultant or staff members with knowledge of the project and requisite writing and communications skills.
If you hire a consultant, don't assume that is the end of your responsibility for ConOps development. The stakeholders are the foremost experts on their needs and must be materially involved in the ConOps development. The consultant can provide technical expertise on what should be in a ConOps, facilitate the meetings and outreach activities, prepare the document, and coordinate the review, but the stakeholders' concept should be documented in the end. The stakeholders should consider the ConOps their document, not the consultant's document.
The best person to write the ConOps may not be the foremost technical expert on the proposed system. Stakeholder outreach, consensus building, and the ability to understand and clearly document the larger picture are key.
Portions of the ConOps can often be created from existing documents. For example, the regional ITS architecture identifies stakeholder roles and responsibilities that can be used. A feasibility study, project study report, or other preliminary study documentation may provide even more relevant information. A project application form used to support project programming will normally include goals, objectives, and other information that should be reflected in the ConOps for continuity.
Operational scenarios are an excellent way to work with the stakeholders to define a ConOps. Scenarios associated with a major incident, a work zone, or another project-specific situation provide a vivid context for a discussion of the system's operation. It is common practice to define several scenarios that cover normal system operation (the "sunny day" scenario) as well as various fault-and-failure scenarios.
A System Validation Plan is prepared that defines the consensus validation approach and performance measures. As with the ConOps, all affected stakeholder organizations should formally approve the System Validation Plan at this early stage so that downstream, all will agree on when they can "declare victory" that the new system is the right system. The plan will be finalized during system validation (see Section 4.9.2).
4.3.3 Output.
The ConOps should be an approachable document that is relevant to all project stakeholders, including system operators, maintainers, developers, owners/decision makers, and other transportation professionals. The art of creating a good ConOps lies in using natural language and supporting graphics so that it is accessible to all while being technically precise enough to provide a traceable foundation for the requirements document and the System Validation Plan.
The ConOps is not a requirements document that lists the detailed, testable requirements for the system, nor is it a design document that specifies the technical design or technologies to be used. Resist the temptation to predetermine the solution in the ConOps – you should not unnecessarily preclude viable options at this early step. You also want to "keep it simple" and refrain from using formalized, highly structured English that is more suitable for the requirements and design specifications that follow.
Done right, the ConOps will be a living document that can be revised and amended so that it continues to reflect how the system is really operated. Later in the life cycle, an up-to-date ConOps can be used to define changes and upgrades to the system.
Two different industry standards provide suggested outlines for Concepts of Operations: ANSI/AIAA-G-043-1992 and IEEE Std 1362-1998, as shown in Figure 13. Both outlines include similar content, although the structure of the IEEE outline lends itself more to incremental projects that are upgrading an existing system or capability. The ANSI/AIAA outline is focused on the system to be developed, so it may lend itself more to new system developments where there is no predecessor system. Successful ConOps have been developed using both outlines. Obtain a copy of both, and make your own choice if you need to develop a ConOps.
Figure 13: Industry-Standard Outlines for Concept of Operations.
Graphics should be used to highlight key points in the ConOps. At a minimum, a system diagram that identifies the key elements and interfaces and clearly defines the scope of the project should be included. Tables and graphics can also be a very effective way to show key goals and objectives, operational scenarios, etc.
The Rule/Policy requires identification of participating agency roles and responsibilities as part of the systems engineering analysis for ITS projects. It also requires that the procedures and resources necessary for operations and management of the system be defined. These elements are initially defined and documented for the project as part of the ConOps. In the ANSI/AIAA standard outline, most of these elements fit under Chapter 3 (User-Oriented Operational Description). In the IEEE outline, the current system information is included in Chapter 3 and the proposed system information is in Chapter 5.
The System Validation Plan that is created during this step should describe how the final system will be measured to determine whether or not it meets the original intent of the stakeholders as described in the ConOps. (For further details and examples, see Section 4.9.)
4.3.4 Examples.
Many Concepts of Operations have been generated for all types of ITS projects in the last five years. Excerpts from a few examples are included here to show some of the ways that key elements of the ConOps have been documented for ITS projects, following the sequence from the ANSI/AIAA outline.
User-Oriented Operational Description (Roles and Responsibilities)
Typically, roles and responsibilities are documented as a list or in tabular form. Table 5 is an excerpt of a table from the California Advanced Transportation Management System (CATMS) ConOps that is structured to show shared responsibilities and to highlight coordination points between the different system stakeholders. This early documentation of "who does what" grabs the stakeholders' attention and supports development of system requirements and operational agreements and procedures in future steps.
The system overview is typically supported by one or more diagrams that show the scope, major elements, and interrelationships of the system. Many types of diagrams can be used, from simple block diagrams to executive-level graphics-rich diagrams. Figure 14 is an example of a high-level graphic that includes basic process flow information, roles and responsibilities, and interfaces, providing an "at a glance" overview of the major facets of the system.
Figure 14: Example of System Overview Graphic.
(from Communicating with the Public Using ATIS During Disasters Concept of Operations)
In operational scenarios, the ConOps takes the perspective of each of the stakeholders as different scenarios unfold that illustrate major system capabilities and stakeholder interactions under normal and stressed (e.g., failure mode) circumstances. The stakeholders walk through the scenario and document what the agencies and system would do at each step.
Figure 15 shows an example of a scenario that includes some realistic detail that helps stakeholders immerse themselves in the scenario and visualize system operation. This is one of five scenarios that were developed for the City of Lincoln StarTRAN AVL system to show the major system capabilities and the interactions between the AVL system and its users and other interfacing systems.
Figure 15: Operational Scenario Description 11.
4.4 System Requirements.
In this step: The stakeholder needs identified in the Concept of Operations are reviewed, analyzed, and transformed into verifiable requirements that define what the system will do but not how the system will do it. Working closely with stakeholders, the requirements are elicited, analyzed, validated, documented, and baselined.
Develop a validated set of system requirements that meet the stakeholders' needs.
Sources of Information.
Concept of Operations (stakeholder needs)
Functional requirements, interfaces, and applicable ITS standards from the regional ITS architecture
Applicable statutes, regulations, and policies
Constraints (required legacy system interfaces, hardware/software platform, etc.)
Elicit requirements
Analyze requirements
Document requirements
Validate requirements
Manage requirements
Create a System Verification Plan
Create a System Acceptance Plan.
System Requirements document
System Verification Plan
Traceability Matrix
System Acceptance Plan.
Proceed only if you have:
Received approval on the System Requirements document from each stakeholder organization, including those that will deploy, test, install, operate, and maintain the new system
Received approval on the System Verification Plan from the project sponsor, the test team, and other stakeholder organizations
Received approval on the System Acceptance Plan from the project sponsor, the Operations & Maintenance (O&M) team, and other stakeholder organizations.
4.4.1 Overview.
One of the most important attributes of a successful project is a clear statement of requirements that meet the stakeholders' needs. Unfortunately, creating a clear statement of requirements is often much easier said than done. The initial list of stakeholder needs that are collected will normally be a jumble of requirements, wish lists, technology preferences, and other disconnected thoughts and ideas. A lot of analysis must be performed to develop a good set of requirements from this initial list.
Figure 16: Requirements Engineering Activities.
EIA-632 12 defines requirement as "something that governs what, how well, and under what conditions a product will achieve a given purpose." This is a good definition because it touches on the different types of requirements that must be defined for a project. Functional requirements define "what" the system must do, performance requirements define "how well" the system must perform its functions, and a variety of other requirements define "under what conditions" the system must operate. Requirements engineering covers all of the activities needed to define and manage requirements that are shown in Figure 16.
Specify What, Not How. Be sure to keep the definition of a requirement in mind as you develop your system requirements. Many requirements documents contain statements that are not requirements. One of the most common pitfalls is to jump to a design solution and then write "requirements" that define how the system will accomplish its functions. Specify what the system will do in the system requirements, and save how the system will do it for the system design step.
It is important to involve stakeholders in requirements development. Stakeholders may not have experience in writing requirements statements, but they are the foremost experts concerning their own requirements. The project requirements ultimately are the primary formal communication from the system stakeholders to the system developer. The project will be successful only if the requirements adequately represent stakeholders' needs and are written so they will be interpreted correctly by the developer.
In the effort to get stakeholders involved, make sure you don't sour them on the project by making unreasonable demands on their time or putting them in situations where they can't contribute. Many nontechnical users have been subjected to stacks of detailed technical outputs that they can't productively review. Sooner or later, the user will wave the white flag in this situation and become unresponsive. You must (1) pick your stakeholders carefully and (2) make participation as focused and productive as possible.
The Requirements step is an important one that you shouldn't skimp on. Every ITS project should have a documented set of requirements that are approved and baselined. Of course, this doesn't mean that a new requirements specification must be written from scratch for every project. Projects that enhance or extend an existing system should start with the existing system requirements. This doesn't have to be a particularly large document for smaller ITS projects. The system requirements specification for a recent website development project was less than 20 pages.
4.4.2 Key Activities.
There isn't one "right" approach for requirements development. Different organizations develop requirements in different ways. Even in the same organization, the requirements development process for a small ITS project can be much less formal than the process for the largest, most complex ITS projects. The differences are primarily in the details and in the level of formality. All requirements development processes should involve elicitation, analysis, documentation, validation, and management activities. Note that each of these activities is highly iterative. In the course of a day, a systems engineer may do a bit of each of the activities as a new requirement is identified, refined, and documented.
Elicit requirements – Building on the stakeholders' needs and other inputs, such as the functional requirements from the regional ITS architecture and any relevant statutes, regulations, or policies, define a strawman set of system requirements and review and expand on them, working closely with the project stakeholders. There are many different elicitation techniques that can be used, including interviews, scenarios (see discussion under Concept of Operations in Section 4.3), prototypes, facilitated meetings, surveys, and observations. These techniques can be used in combination to discover the stakeholders' requirements.
Elicit and elicitation are words you may not run into every day. Elicit means to draw forth or to evoke a response. This is the perfect word to use in this case because you will have to do some work to draw out the requirements from the stakeholders and any existing documentation. More work is implied by "elicit requirements" than if we said "collect requirements" or even "identify requirements", and this is intended.
Make sure that you have the right stakeholders involved. This means not only the right organizations but also the right individuals within them. For example, it isn't enough to engage someone from the maintenance organization – it should be an electrical maintenance person who has experience with ITS equipment maintenance for ITS projects. Furthermore, as we move through the steps in the process and the products become more technical, different stakeholders may be involved. Managers may be more involved in the Concept of Operations, while technical staff will be more involved in review of the system requirements and high-level design. Finding individuals with the right combination of knowledge of current operations, vision of the future system, and time to invest in supporting requirements development is one of the key early challenges in any requirements development effort.
There are many techniques for working with stakeholders to get to the fundamental requirements of the system. The Florida SEMP 13 highlights one of the best and simplest techniques – the "Five Whys" – that was popularized by Toyota in the 1970s. Using the Five Whys technique, you look at an initially stated need and ask "Why?" repeatedly, not unlike a curious four-year-old, until you find the real underlying requirements. The dialog in Table 6 is an example that is based on an actual conversation.
Of course, you sometimes need to direct the conversation by asking more than "why" to use this technique effectively. In the example, the conversation could easily have veered off to a discussion of the user's love for Starbucks coffee. Five iterations is a good rule of thumb, but it may take fewer or more iterations – the idea is to be persistent until you get to the core issues. Note also that the dialog can be internal – the stakeholder could have sat down and asked herself "Why", using the same technique to get at her underlying needs.
As you gather the requirements, be sure to look beyond the operational requirements for the system and cover the complete life cycle (system development, deployment, training, transition, operations and maintenance, upgrades, and retirement) as well as requirements such as security and safety. More than one ITS project has failed because the security requirements of public safety stakeholders were not captured and reflected in the ITS project requirements. A good system requirements template can be used as a checklist to help ensure that all types of requirements are considered.
The best way to start writing requirements is to use just two words: a verb and a noun. For example, the user requirement "monitor road weather conditions" would yield system requirements such as "shall detect ice", "shall monitor wind speed", and "shall monitor pavement temperature". Performance requirements would define the different kinds of ice conditions and the range of wind speeds and pavement temperatures.
Requirements are normally defined in a requirements hierarchy in which the highest-level "parent" requirements are supported by more detailed "child" requirements. A hierarchy allows you to start with high-level requirements and work your way down to the details. The highest-level requirements should trace to stakeholder needs in the Concept of Operations. A hierarchy is a useful organizational structure that makes it easier to write and review requirements and to manage the requirements development activity. An example of a requirements hierarchy is given in Figure 17.
Figure 17: Example of Hierarchy of High-Level and Detailed Requirements.
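A parent/child hierarchy like the one in Figure 17 can be represented very simply. The sketch below reuses the "monitor road weather conditions" example from above; the requirement IDs and wording are our own illustrative convention, not from the figure.

```python
# Sketch of a requirements hierarchy (IDs and wording are illustrative).
# Each child refines its parent; top-level items trace to ConOps needs.
requirements = {
    "SR-1":   {"parent": None,   "text": "The system shall monitor road weather conditions."},
    "SR-1.1": {"parent": "SR-1", "text": "The system shall detect ice."},
    "SR-1.2": {"parent": "SR-1", "text": "The system shall monitor wind speed."},
    "SR-1.3": {"parent": "SR-1", "text": "The system shall monitor pavement temperature."},
}

def children(req_id):
    """Return the child requirements derived from a parent requirement."""
    return [r for r, attrs in requirements.items() if attrs["parent"] == req_id]

print(children("SR-1"))
```

Walking such a structure top-down is what lets reviewers check that every detailed requirement supports a higher-level one.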
For larger systems, it can be very difficult to "get your arms around" all of the requirements. Requirements modeling tools provide a graphic way to define requirements so that they are easier to understand and analyze. These tools are particularly useful for more complex ITS projects. There are numerous requirements modeling tools and techniques available that can help you model the system as part of the analysis process. INCOSE maintains a repository of available modeling tools on its website 14.
A model is a representation of something else. There are physical models, like the scale model of a train, and more abstract models, like an architectural plan for a new building. Many different models of the system to be built can be created and used as part of the systems engineering process. During requirements analysis, logical models are used that describe what the system will do. Later, during system design, physical models are created that show how the system will be implemented.
Requirements modeling is an iterative process. Draft models can be developed early in the process based on the Concept of Operations and the regional ITS architecture. These models are refined as they are used to support requirements elicitation and walkthroughs, keeping bounds on the system and reducing requirements creep.
The requirements documentation should include more than requirements. There are many different attributes that should be tracked for each requirement. A rich set of attributes is particularly important for large, complex projects. If you are developing such a project, consider specifying the following for each requirement: requirement number, source, author, creation date, change history, verification method, priority, and status. The historical and change-tracking attributes are particularly important since they allow management to measure and track requirements stability.
Traceability is another important aspect of requirements documentation. Each requirement should trace to a higher-level requirement, a stakeholder need, or other governing rules, standards, or constraints from which the requirement is derived. As the system is developed, each requirement will also be traced to the test case that will verify it, to more detailed "child" requirements that may be derived from it, and to design elements that will help to implement it. Establish and populate the Traceability Matrix at this stage, and continue to populate it during development. The Traceability Matrix is a vital document that is maintained to the end of project development, allowing traceability from user needs to the system components, verification, and validation.
You will see "validation" used in a few different contexts in systems engineering. Here in requirements validation, you make sure that the requirements are correct and complete. Later, in system validation (discussed in Section 4.9), you make sure that you have built the right system. In fact, the requirements validation that is performed here will ultimately help to make sure that the system validation is successful in the end.
A walkthrough is a technique in which a review team steps through a deliverable (e.g., requirements, design, or code) looking for problems. A walkthrough should be relatively informal and "blame free" to maximize the number of problems that are identified. A requirements walkthrough should be attended by the people who have a vested interest in the requirements. For a large project, this might include the requirements author, customer, user representative(s), implementers, and testers.
Table 7 identifies an oft-repeated list of attributes of good requirements. As part of the validation process, you do your best to make sure that the requirements have all of these desired attributes. Unfortunately, computers can do only a fraction of this validation and people have to do the rest. Techniques for validating a requirement against each of these quality attributes are also shown in Table 7. An attribute list like this can be converted into a checklist that prompts reviewers to ask themselves the right questions as they are reviewing the requirements.
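A few of these checks can in fact be automated. The sketch below (Python; the term list and rules are illustrative examples, not a substitute for the checklist in Table 7) screens requirement text for common red flags and leaves the judgment calls to human reviewers:

```python
import re

# Illustrative word list only; a real screening list would come from the
# project's own validation checklist.
AMBIGUOUS = {"user-friendly", "fast", "adequate", "as appropriate"}

def screen(req_id, text):
    """Return possible problems for human reviewers to resolve."""
    problems = []
    if "shall" not in text:                     # verifiable requirements use "shall"
        problems.append("no 'shall' statement")
    for word in AMBIGUOUS:
        if word in text.lower():
            problems.append(f"ambiguous term: '{word}'")
    if re.search(r"\band/or\b", text):          # "and/or" is untestable
        problems.append("contains 'and/or'")
    return [(req_id, p) for p in problems]

# A deliberately poor requirement flags three issues.
issues = screen("UR-7", "The website should be fast and user-friendly.")
```

Output like this can seed the reviewer checklist described above; it cannot judge correctness, completeness, or feasibility.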
Every ITS project should have a tool that helps to manage the requirements baseline. More complex ITS projects will benefit from a tool built specifically for requirements management, such as DOORS or RequisitePro. A professional requirements management tool is expensive, but it provides a long list of capabilities, including change management, requirements attribute storage and reporting, impact analysis, requirements status tracking, requirements validation tools, access control, and more.
Like the other requirements engineering activities, the requirements management capabilities should be scaled based on the complexity and size of the ITS project. Requirements for smaller ITS projects can be managed easily and effectively by a single engineer using a general purpose tool like Microsoft Access or Excel.
4.4.3 Outputs.
No matter how you developed your requirements, you must document them in some consistent, accessible, and reviewable way. The requirements development process may result in several different levels of requirements over several steps in the "V" – stakeholder requirements, system requirements, subsystem requirements, etc. – that may be documented in several different outputs. For example, stakeholder requirements might be documented in a series of Use Cases; system requirements, in a System Requirements Specification; and subsystem requirements, in subsystem specifications. All of these requirements should be compiled in a single repository that can be used to manage and publish the requirements specifications at each stage of the project.
It is much easier to use a standard template for the requirements specifications than it is to come up with your own, and numerous standard templates are available. If your organization does not have a standard requirements template, you can start with a standard template like the one contained in IEEE Standard 830 (for software requirements specifications) or IEEE Standard 1233 (for system requirements specifications). Starting with a template saves time and ensures that the requirements specification is complete. Of course, the template can be modified as necessary to meet the needs of the project.
The system requirements specification should fully specify the system to be developed and should include the following information:
- System boundary with interfacing systems clearly identified
- General system description, including capabilities, modes, and users, as applicable
- External interface requirements for interfacing systems and people
- Functional requirements and associated performance requirements
- Environmental requirements
- Life-cycle process requirements supporting development, qualification (e.g., test, verification, validation, and acceptance), production, deployment, transition, operations and maintenance, change and upgrade, and retirement/replacement, as applicable
- Reliability and availability
- Expandability
- Staffing, human factors, safety, and security requirements
- Physical constraints (such as weight and form factors).
As you read through this list, you may recognize that some of this information has already been collected and documented in previous steps, and there is no need to recreate it here. Refer back to the Concept of Operations that already contains a description of the system boundary, the system itself, and other items in this list.
A System Verification Plan, describing the approach for verifying each and every system requirement, and a System Acceptance Plan, describing the capabilities that must function successfully for customer acceptance, should be created, reviewed, and approved.
4.4.4 Examples.
The Oregon DOT TripCheck project developed a User Functional Requirements Specification, which lists the user requirements for the redesigned TripCheck website. The excerpt from this document in Table 8 shows several user requirements for the website autorouting function. As shown, every requirement is prioritized on a scale from 1 ("must have") to 4 ("don't implement") and is related to different types of end users – Commuters (C), Inter-City Travelers (ICT), Tourist Travelers (TT), ADA Travelers (ADA), and Commercial Truckers (CT). These prioritized user requirements were used by the contractor to support Use Case modeling and to define system requirements.
Note that stakeholder requirements that are collected through the requirements elicitation process are likely to have a few imperfections. The key is to document the stakeholder requirements, make them as clear and succinct as possible, prioritize them, and then use them to develop more formally stated system requirements.
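The prioritization scheme lends itself to simple tooling. The Python sketch below uses invented requirements in the TripCheck style to filter the "must have" requirements for one user type so they can be carried forward into system requirements:

```python
# Hypothetical prioritized user requirements, TripCheck style:
# priority 1 = "must have" ... 4 = "don't implement"; user-type tags as
# in Table 8 (C, ICT, TT, ADA, CT). All entries are invented examples.
user_reqs = [
    ("UR-1", "Provide point-to-point autorouting.", 1, {"C", "ICT", "TT"}),
    ("UR-2", "Allow avoidance of toll roads.",      2, {"ICT", "CT"}),
    ("UR-3", "Offer scenic-route preferences.",     3, {"TT"}),
    ("UR-4", "Support fax delivery of routes.",     4, set()),
]

def must_haves_for(user_type, reqs):
    """'Must have' requirements relevant to one user type."""
    return [r_id for r_id, _, pri, users in reqs
            if pri == 1 and user_type in users]

commuter_musts = must_haves_for("C", user_reqs)
```

Even this trivial filtering helps keep the development effort focused on the highest-priority needs first.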
The Maryland CHART II system is a statewide traffic management system that has been operational since 2001. The CHART program maintains a website that provides all of the CHART documentation at chart.state.md.us, including a comprehensive system requirements document. A few of the system requirements for the equipment inventory and report generation functions are shown in Table 9.
3.1.3 Equipment Inventory.
The equipment inventory is a list of SHA equipment used in connection with CHART response to incidents. The system provides functions to maintain the inventory and equipment status and to generate alerts for delinquent equipment.
3.1.3.1 The system shall provide the capability to maintain the equipment inventory.
3.1.3.1.1 The system shall support the addition of new equipment entries to the inventory.
3.1.3.1.2 The system shall support the modification of existing equipment inventory entries.
3.1.3.1.3 The system shall support the deletion of equipment inventory entries.
3.1.3.1.4 The system shall support the allocation of equipment to events.
3.1.4 Report Generation.
This section lists requirements for the generation of reports from the CHART system and archive data.
3.1.4.1 The system shall provide the capability to generate reports from online and archived data.
3.1.4.2 The system shall support the generation of operational reports.
3.1.4.2.1 The system shall support the generation of a Center Situation report.
3.1.4.2.2 The system shall support the generation of a Disable Vehicle event report.
3.1.4.2.3 The system shall support the generation of an Incident event report.
3.1.4.2.4 The system shall support the generation of traffic volume reports.
Table 10 is a typical traceability matrix that would be maintained and populated throughout the project development process. The matrix may be maintained directly in a database or spreadsheet for small projects or generated and maintained with a requirements management tool for more complex projects. Using either approach, the matrix provides backwards and forwards traceability between stakeholder needs (and other potential requirements sources), system requirements, design, implementation, and verification test cases. As shown, only the unique identifiers (e.g., UN1.1) are actually included in the traceability matrix so you don't have to keep many instances of the actual text up-to-date. Note also that the design and implementation columns would not actually be completed until later in the process.
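A traceability matrix of this kind is easy to query in either direction. The Python sketch below (all identifiers are illustrative, in the style of Table 10) shows forward and backward traces, plus an orphan check of the sort that often surfaces review findings:

```python
# Minimal traceability sketch: only identifiers are stored, as in Table 10.
# IDs are invented (UN = user need, SR = system requirement, D = design
# element, TC = test case); a real matrix would live in the repository.
trace = [
    # (user need, system requirement, design element, test case)
    ("UN1.1", "SR3.1.3.1.1", "D-Inv-01", "TC-101"),
    ("UN1.1", "SR3.1.3.1.2", "D-Inv-02", "TC-102"),
    ("UN2.4", "SR3.1.4.2.1", "D-Rpt-01", "TC-210"),
]

def forward(need):
    """Forward traceability: which requirements implement a user need?"""
    return sorted({sr for un, sr, _, _ in trace if un == need})

def backward(test_case):
    """Backward traceability: which user need does a test case verify?"""
    return sorted({un for un, _, _, tc in trace if tc == test_case})

def orphans(all_requirements):
    """Requirements with no trace entry -- a common review finding."""
    traced = {sr for _, sr, _, _ in trace}
    return [r for r in all_requirements if r not in traced]
```

Queries like `forward("UN1.1")` or `orphans(...)` are exactly what commercial requirements management tools automate at scale.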
4.5 System Design.
In this step: A system design is created based on the system requirements, including a high-level design that defines the overall framework for the system. Subsystems of the system are identified and decomposed further into components. Requirements are allocated to the system components, and interfaces are specified in detail. Detailed specifications are created for the hardware and software components to be developed, and final product selections are made for off-the-shelf components.
- Produce a high-level design that meets the system requirements and defines key interfaces, and that facilitates development, integration, and future maintenance and upgrades
- Develop detailed design specifications that support hardware and software development and procurement of off-the-shelf equipment.
Sources of Information.
- Concept of Operations
- System Requirements document
- Off-the-shelf products
- Existing system design documentation
- ITS standards
- Other industry standards.
- Evaluate off-the-shelf components
- Develop and evaluate alternative high-level designs
- Analyze and allocate requirements
- Document interfaces and identify standards
- Create Integration Plan, Subsystem Verification Plans, and Subsystem Acceptance Plans
- Develop detailed component-level design specifications.
- Off-the-shelf evaluation and alternatives summary reports
- High-level (architectural) design
- Detailed design specifications for hardware/software
- Integration Plans, Subsystem Verification Plans, Subsystem Acceptance Plans, and Unit/Device Test Plans.
Proceed only if you have:
- Approved high-level design for the project
- Defined all system interfaces
- Traced the system design specifications to the requirements
- Approved detailed specifications for all hardware/software components.
4.5.1 Overview.
In the systems engineering approach, we define the problem before we define the solution. The previous steps in the "V" have all focused primarily on defining the problem to be solved. The system design step is the first step where we focus on the solution. This is an important transitional step that links the system requirements that were defined in the previous step with system implementation that will be performed in the next step, as shown in Figure 18.
Figure 18: System Design is the Bridge from Requirements to Implementation.
There are two levels of design that should be included in your project design activities:
High-level design is commonly referred to as architectural design in most systems engineering handbooks and process standards. The term architectural design is used because an overall structure for the project is defined in this step. IEEE 610 defines architectural design as "the process of defining a collection of hardware and software components and their interfaces to establish the framework for the development of a computer system". Of course, ITS projects may include several computer systems, a communications network, distributed devices, facilities, and people. High-level design defines a framework for all of these project components.
Detailed design is the complete specification of the software, hardware, and communications components, defining how the components will be developed to meet the system requirements. The software specifications are described in enough detail that the software team can write the individual software modules. The hardware specifications are detailed enough that the hardware components can be fabricated or purchased.
Many consider design to be the most creative part of project development. Two different designs might both meet the system requirements, but one could be far superior in how efficiently it can be developed, integrated, maintained, and upgraded over time. Perhaps the most significant contributor to a successful design is previous design experience with similar systems. The latest car designs all build on 100 years of accumulated automotive design experience. Similarly, the design of a new transportation management system should build on existing successful transportation management system designs. In both cases, the system designer builds on knowledge of what worked before and, perhaps even more importantly, what did not.
It is extremely rare to find an ITS system that is truly "unprecedented", so many if not most system designs should be able to build on existing design information. This is particularly true for projects that are extending an existing system that already includes a well-documented design. In this case, the high-level design will change only to the degree that new functionality or interfaces are added. Similarly, much of the detailed design can be reused for projects that extend the coverage of an existing system.
4.5.2 Key Activities.
System design is a cooperative effort that is performed by systems engineers and the implementation experts who will actually build the system. The process works best when there is a close working relationship among the customer, the systems engineers (e.g., a consultant or in-house systems engineering staff), and the implementation team (e.g., a contractor or in-house team).
High-level design is normally led by systems engineers with participation from the implementation experts to ensure that the design is implementable. Typical activities of high-level design are shown in Figure 19. Each of these activities is performed iteratively as high-level design alternatives are defined and evaluated.
Figure 19: High-Level Design Activities.
Evaluate off-the-shelf components - One key aspect of high-level design is the identification of components that will be purchased, reused, or developed from scratch. The project may be required to use off-the-shelf hardware or software, or this may simply be the preferred solution. Specific design constraints may also require that a particular product be used. For example, a municipality that is expanding a signal control system that already includes 300 Type 170 controllers may constrain the design of the expansion to use the same controllers to facilitate operation and maintenance of the overall system. State DOTs and other large agencies often publish approved products lists that identify ITS-related products that meet agency specifications.
When off-the-shelf components will be used, the high-level design must be consistent with the capabilities of the target products. The designer should have an eye on the available products as the high-level design is produced to avoid specifying a design that can be supported only by a custom solution. A particular product should not be specified in the high-level design unless it is truly required. When possible, the high-level design should be vendor and technology independent so that new products and technologies can be inserted over time.
You should give off-the-shelf hardware and software serious consideration and use it where it makes sense. The potential benefits of off-the-shelf solutions – reduced acquisition time and cost, and increased reliability – should be weighed against the requirements that may not be satisfied by the off-the-shelf solution and potential loss of flexibility. If you have requirements that preclude off-the-shelf solutions, determine how important they are and what their real cost will be. This make/buy evaluation should be documented in a summary report that considers the costs and benefits of off-the-shelf and custom solution alternatives over the system life cycle. This report should be a key deliverable of the project.
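At its core, the life-cycle comparison in such a summary report is simple arithmetic. The Python sketch below uses invented cost figures purely to illustrate the structure of the calculation; a real study would use actual quotes, labor estimates, and maintenance projections:

```python
# Hedged sketch of a make/buy life-cycle cost comparison.
# All figures below are invented placeholders.
def life_cycle_cost(acquisition, annual_maintenance, customization=0.0, years=10):
    """Total cost of ownership over the analysis period."""
    return acquisition + customization + annual_maintenance * years

ots = life_cycle_cost(acquisition=150_000, annual_maintenance=20_000,
                      customization=40_000)   # off-the-shelf plus tailoring
custom = life_cycle_cost(acquisition=450_000, annual_maintenance=35_000)

# Cost is only one input to the decision: unmet requirements and loss of
# flexibility must be weighed alongside the cheaper number.
preferred = "off-the-shelf" if ots < custom else "custom"
```

Documenting the inputs and the analysis period alongside the result is what turns this arithmetic into a defensible deliverable.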
Also recognize that there is a large grey area between off-the-shelf and custom software for ITS applications. Every qualified software developer starts with an established code base when creating the next "custom solution", accruing some of the benefits of off-the-shelf solutions. Many vendors of off-the-shelf solutions offer customization services, further blurring the distinction between off-the-shelf and custom software.
The FHWA report The Road to Successful ITS Software Acquisition includes a good discussion of software make/buy decision factors and a lot of other good information on software acquisition for ITS. The executive summary for the report is available at itsdocs.fhwa.dot.gov/jpodocs/repts_te/36s01!.pdf.
Figure 20: Electronic Toll Collection Subsystems and Components (Excerpt)
There are many different ways that a system can be partitioned into subsystems and components. In this Electronic Toll Collection example, we might consider whether the Clearinghouse Processing subsystem should be handled by a single centralized facility or distributed to several regional facilities. As another example, vehicle detectors could be included in the Video Enforcement subsystem or in the Tag Reader subsystem, or both.
Even a relatively simple traffic signal system has high-level design choices. For example, a traffic signal system high-level design can be two-level (central computer and local controllers), three-level (central computer, field masters, and local controllers), or a hybrid design that could support either two or three levels. High-level design alternatives like these can have a significant impact on the performance, reliability, and life-cycle costs of the system. Alternative high-level designs should be developed and compared with respect to defined selection criteria to identify the superior design.
The selection criteria that are used to compare the high-level design alternatives include consistency with existing physical and institutional boundaries; ease of development, integration, and upgrading; and management visibility and oversight requirements. One of the most important factors is to keep the interfaces as simple, standard, and foolproof as possible. The selection criteria should be documented along with the analysis that identifies the superior high-level design alternative that will be used. If there are several viable alternatives, they should be reviewed by the project sponsor and other stakeholders.
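A common way to document such a comparison is a weighted-criteria matrix. The Python sketch below applies invented weights and scores to the two-level versus three-level signal system example above; a real evaluation would document the rationale behind every number:

```python
# Illustrative weighted-criteria comparison of high-level design
# alternatives. Criteria, weights, and 1-5 scores are all invented
# for this sketch.
criteria = {"ease of integration": 0.40,
            "life-cycle cost":     0.35,
            "reliability":         0.25}

scores = {
    "two-level":   {"ease of integration": 4, "life-cycle cost": 5, "reliability": 3},
    "three-level": {"ease of integration": 3, "life-cycle cost": 3, "reliability": 5},
}

def weighted_total(alternative):
    """Weighted score for one design alternative."""
    return sum(criteria[c] * scores[alternative][c] for c in criteria)

best = max(scores, key=weighted_total)
```

The value of the exercise is less the final number than the documented criteria, weights, and scoring rationale that stakeholders can review and challenge.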
The Rule/Policy requires the systems engineering analysis for ITS projects to include an analysis of alternative system configurations.
Analyze and allocate requirements - The detailed functional requirements and associated performance requirements are allocated to the system components. To support allocation, the relationships between the required system functions are analyzed in detail. Once you understand the relationships between functions, you can make sure that functions that have a lot of complex and/or time-constrained interactions are allocated to the same component as much as possible. Through this process, each component is made as independent of the other components as possible.
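One way to check a candidate allocation is to count the interactions that cross component boundaries. The Python sketch below, with hypothetical function names loosely drawn from the toll collection example, compares two allocations on that measure:

```python
# Hypothetical pairs of functions with complex or time-critical exchanges.
interactions = [
    ("detect_vehicle", "read_tag"),
    ("read_tag", "post_charge"),
    ("post_charge", "reconcile_accounts"),
]

def cross_component_interfaces(allocation):
    """Count interactions that cross a component boundary; lower is better."""
    return sum(1 for a, b in interactions if allocation[a] != allocation[b])

# Two candidate allocations of functions to components (names invented).
alloc_a = {"detect_vehicle": "lane", "read_tag": "lane",
           "post_charge": "clearinghouse", "reconcile_accounts": "clearinghouse"}
alloc_b = {"detect_vehicle": "lane", "read_tag": "clearinghouse",
           "post_charge": "lane", "reconcile_accounts": "clearinghouse"}
```

Here `alloc_a` keeps the tightly coupled lane-side functions together and needs only one cross-component interface, while `alloc_b` needs three; this is the coupling-minimization idea described above, reduced to a countable check.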
You would not want to develop a high-level design and requirements allocation for a complex ITS project without software tools. Fortunately, there are many good tools that support both requirements analysis and architectural design. The INCOSE tools database, available to nonmembers free of charge on the INCOSE website, includes a broad range of systems engineering tools and a detailed survey of tools that support requirements management and system architecture.
This is the place to identify, in detail, the ITS standards and any other industry standards that will be used. There are a variety of standards that should be considered at this point. Take a look at all interfaces, both external and internal. Since your regional ITS architecture and/or project ITS architecture was based on the National ITS Architecture, many of the interfaces probably already have a set of ITS standards you should consider. You should also identify standards that are used in your region or state, and also in adjoining states if your project is a multistate deployment. A methodical assessment should be made for each interface to determine which standards are relevant, which standards should be deployed, and perhaps which standards should be phased in over time as part of a longer-range plan.
Once you have taken a look at the relevant standards, beginning with your system's external interfaces, document the nature of the data, formats, ranges of values, and periodicity of the information exchanged on the interface. Then proceed to each of the internal interfaces and document the same information for those.
Agencies are encouraged to incorporate the ITS standards into new systems and upgrades of existing systems. The Rule/Policy requires the systems engineering analysis for ITS projects to include an identification of ITS standards. Consult the ITS Standards Program website at standards.its.dot.gov for more information and available resources supporting standards implementation.
Hardware and software specialists create the detailed design for each component identified in the high-level design. Systems engineers play a supporting role, providing technical oversight on an ongoing basis. As you might expect, the detailed design activity will vary for off-the-shelf and custom components, as shown in Figure 21.
Figure 21: Detailed Design Activities.
Prototype user interface - If a user interface is to be developed, a simple user interface prototype is an efficient way to design it.
A prototype is a quick, easy-to-build approximation of a system or part of a system. A software prototype can be used to quickly implement almost any part of a system that you want to explore, but it is used most often to make a quick approximation of a user interface for a new system.
A user interface prototype should be employed to help the user and developer visualize the interface before significant resources are invested in software development. This is one area in particular where you can expect multiple iterations as the developers incrementally create and refine the user interface design based on user feedback. (You will find that it is often easier to get users to provide feedback on a prototype than on system requirements and design specifications, which can be tedious to review.)
While the user interface prototype is included here because it is an effective way to design the user interface, prototypes may actually be generated much earlier in the process, during system requirements development. The prototype can turn the requirements statements into something tangible that users can react to and comment on.
Figure 22: Architectural Design within a System Component.
The detailed design of each component should be reviewed to verify that it meets the allocated requirements and is fit for the intended purpose. Periodic or as-needed reviews can be held to monitor progress and resolve any design issues. For larger projects, coordination meetings should be held to ensure that concurrent design activities are coordinated to mitigate future integration risks. At the completion of the detailed design step, a broader stakeholder meeting is held to review and approve the detailed design before the implementation team begins to build the solution.
Select off-the-shelf (OTS) products – One of the fundamental principles of systems engineering is to delay technology choices until you have a solid foundation for making the right choice. By waiting until this point in the process, the latest technologies and products can be selected, and these selections can be based on a thorough understanding of the requirements and the overall architecture of the system. The selections can also be made by specialists who are closest to the implementation and are therefore best equipped to make them.
There are two fundamental ways that a product can be selected, depending on your procurement requirements and selected procurement strategy:
A trade study can be performed that compares the alternative products and selects the best product based on selection criteria that are in turn based on the specification. A competitive procurement can be used that allows vendors to propose products that will best meet the specification.
In either case, product selection should be driven by a good performance-based specification of the product.
Specifications can be either performance-based or prescriptive. In a performance-based specification, you specify the functionality and the performance that are required rather than what equipment to use. In a prescriptive specification, you specify exactly the equipment that you want. A performance-based specification for a dynamic message sign would include statements like "The sign shall provide a display of 3 lines of 25 characters per line." A prescriptive specification would be "The Trantastic LED Model XYZ sign shall be used." Performance-based specifications tend to provide the best value because they allow the contractor or vendor maximum flexibility to propose the best solution that meets your needs.
If a trade study is performed, then the functional and performance requirements that are allocated to the product should be used to define product selection criteria. An alternatives analysis document captures the alternatives that were considered and the selection criteria that were used to select the superior product. Existing trade studies, approved product lists, and other resources can be used to facilitate product selection.
The evaluation of OTS products should be reviewed to verify that the evaluation criteria were properly defined and applied fairly and that an appropriate range of products was considered.
4.5.3 Outputs.
There isn't a single "best way" to present the high-level design to stakeholders and developers since different users will have different needs and different viewpoints. Over the years, high-level designs have evolved to include several different interconnected "views" of the system. Each view focuses on a single aspect of the system, which makes the system easier to analyze and understand. The specific views that are presented will vary, but they will typically include a physical view that identifies the system components and their relationships; a functional view that describes the system's behavior; a technical view that identifies the interfaces in detail, including the standards to be used; and an informational view that describes the information that will be managed by the system. As shown in Figure 23, these views are just different ways of looking at the same system.
Figure 23: High-Level Design May Include Several Views.
Other outputs of the high-level design process include Integration Plans, Subsystem Verification Plans, and Subsystem Acceptance Plans that will be used in the integration and verification of the system. (See Section 4.7 for further details.)
This activity results in the design of hardware and software for all system components that will support hardware and software development and off-the-shelf product procurement. Other artifacts of the development process include unit/device verification plans. A record of the technical reviews that were conducted should also be included in the project documentation.
4.5.4 Examples.
The CHART II documentation includes a system architecture document that includes many different views of the CHART II system, such as entity relationship diagrams, Use Case diagrams, and network architecture diagrams. Table 11 is an excerpt from the document that shows the subsystems included in the CHART II software.
In contrast with the CHART II statewide system high-level design, many smaller ITS projects have relatively simple high-level designs, such as the system architecture for the MyBus system depicted in Figure 24. This figure identifies the subsystems and major interfaces in the MyBus system.
Figure 24: Metro Transit MyBus System Architecture.
ITS projects that include significant user interface development should prototype the user interface to help users visualize the software that will be developed before significant resources are committed. The objective is to develop a prototype that demonstrates the software look and feel with the least amount of work possible. The simplest prototypes are a series of static images in paper form. For example, when ODOT redesigned its TripCheck website, the implementation team developed a series of "wireframe" diagrams that showed the proposed interface design with enough detail to gather user feedback. One of the 40 wireframe diagrams that was included in the design package is shown in Figure 25.
Figure 25: User Interface Prototype Example: ODOT TripCheck Wireframe Diagram.
There are many ways to document software detailed design. Most commonly, it is portrayed using object-oriented techniques and the Unified Modeling Language, but any technique that the implementation team selects is fine as long as it is detailed enough to support software construction and clear enough to support peer reviews and walkthroughs.
Table 12 is an example of a detailed design for part of the Shadow software that works behind the scenes to keep the traffic information on the ODOT TripCheck website up to date. Note that the interface is defined and that loosely structured program design language (PDL) is used to define the algorithm that is used to process transactions. If much of this appears to be gibberish to you, you are not alone. This is why many agencies use software specialists to provide an independent review of the detailed software development artifacts for higher risk software projects on their behalf.
4.6 Software/Hardware Development and Testing.
In this step: Hardware and software solutions are created for the components identified in the system design. Part of the solution may require custom hardware and/or software development, and part may be implemented with off-the-shelf items, modified as needed to meet the design specifications. The components are tested and delivered ready for integration and installation.
- Develop and/or purchase hardware and software components that meet the design specifications and requirements with minimum defects
- Identify any exceptions to the requirements or design specifications that are required.
Sources of Information.
- System and subsystem requirements
- System design
- Off-the-shelf products
- Industry standards
- Unit/Device Test Plans.
- Plan software/hardware development
- Establish development environment
- Procure off-the-shelf products
- Develop software and hardware
- Perform unit/device testing.
- Software/hardware development plans
- Hardware and software components, tested and ready for integration
- Supporting documentation (e.g., training materials, user manuals, maintenance manuals, installation and test utilities)
Proceed only if you have:
- Conducted technical reviews of the hardware/software
- Performed configuration/quality checks on the hardware and software
- Received all supporting documentation
- Verified that unit/device testing has been successfully completed.
4.6.1 Overview.
Although hardware and software development may be the first task that comes to mind when thinking about an ITS project, the systems engineering approach focuses on the preceding requirements and design steps and on the integration, verification, and validation steps to follow.
This is where the investment in a clear set of requirements and a good system design should begin to pay dividends. The systems engineering process now provides technical oversight as an implementation team of specialists fabricates the hardware and writes the software. This is a highly iterative process, particularly for software, where key features may be incrementally implemented, tested, and incorporated into the baseline over time. Progress is monitored through a planned series of walkthroughs, inspections, and reviews, as shown in Figure 26.
Figure 26: Monitoring Software/Hardware Development.
Although the systems engineering approach does not specify the mechanics of hardware and software development (this is left to the implementation team), the development effort is obviously critical to project success. This is the time to build quality into the hardware/software and to minimize defects. A common refrain in the software industry is that you can't test quality into the software – you must build it in from the beginning. The systems engineering activities that are suggested in this chapter are intended to ensure that the implementation team builds quality into their products.
In practice, most of the hardware that is used for ITS projects is purchased off the shelf. Software development is more prevalent, but many ITS projects include little or no software development. ITS projects that do not include custom hardware or software development acquire the necessary off-the-shelf hardware and software components at this step. Detailed specifications created as part of the detailed design step described in Section 4.5 are used to support the acquisition. The system components are acquired, and bench testing is performed to verify that they meet their specifications. In such cases, the detailed hardware/software development and unit testing described in this chapter are not required.
Custom software development for ITS projects has proven to be a relatively risky endeavor. This is why software development receives more attention than hardware development in this chapter. It is beyond the scope of this document to discuss specific software development techniques, but there are several clear factors that contribute to software development success:
No matter how clear and unambiguous the requirements appear, it is almost certain that the software customer and the software implementation team will interpret some of the requirements differently. Requirements walkthroughs that are described in Section 4.4.2 help to mitigate this risk, but ultimately the customer/stakeholders will have to monitor the software as it is being developed to ensure that the development is proceeding in the right direction. Expect and plan for course corrections and requirements changes along the way, at least until we discover the way to build the "perfect specification". Ensure that the contract is flexible enough to have a couple of reviews and that it allows some visits or informal reviews with the developers to see how they are doing. This might be one of the project risks to include in your risk management plan. (See Section 5.3 for more information on risk management.)
4.6.2 Key Activities.
The hardware and software specialists implement and test each system component. Systems engineers play a supporting role, providing technical oversight on an ongoing basis to identify minor issues early, before they grow into large problems. The process works best when there is a close working relationship among the customer, the systems engineers (e.g., a consultant or in-house systems engineering staff), and the implementation team (e.g., a contractor or an in-house team). Each of the activity descriptions is followed by a discussion of the technical review and monitoring of that activity.
Plan software/hardware development – The implementation team documents its development process, best practices, and conventions that will be used. The Software/Hardware Development plan should address development methods, documentation requirements, delivery stages, configuration control procedures, technical tracking and control processes, and the review process. The plan(s) should be reviewed by the customer and the broader project team.
The Software/Hardware Development plan should be reviewed and approved before development begins. Well-qualified implementation teams will already have proven processes in place that can be tailored for the specific project, so this shouldn't be viewed as a burdensome activity. The intent is not to mandate a particular implementation process but to ensure that the implementation team has an established process that they will follow. An implementation team that doesn't have a documented process is a red flag.
Although it is sometimes overlooked, the development environment is just as critical to future software maintenance as the actual detailed design documentation and source code. Every tool that is used to develop and test the software should be documented, including version information and complete documentation of any customization or extensions. If this is a custom development and you have paid for the tools, include the development environment as a project deliverable.
A peer review or inspection can be used to verify that the development environment is adequate and accurately documented. Once established, the development environment should be placed under configuration management (discussed in Section 5.4) so that changes to the environment are tracked. Seemingly minor changes like application library upgrades or operating system service pack upgrades can cause problems later if they are not controlled and tracked.
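The environment documentation described above can be kept machine-readable so that changes are easy to detect under configuration management. The following is a minimal Python sketch of such a manifest; the tool names and version numbers are invented for illustration, and a real project would record every compiler, library, and utility it uses.

```python
import json
import platform
import sys

def capture_environment(tools):
    """Snapshot the development environment as a manifest that can be
    placed under configuration management and diffed between baselines."""
    return {
        "python": sys.version.split()[0],
        "os": f"{platform.system()} {platform.release()}",
        "tools": dict(sorted(tools.items())),  # tool name -> pinned version
    }

# Hypothetical tool versions recorded by the implementation team.
manifest = capture_environment({"compiler": "13.2.0", "build-system": "3.27.4"})
print(json.dumps(manifest, indent=2))
```

Committing a manifest like this alongside the source code makes seemingly minor environment changes, such as a library upgrade, show up as a tracked difference rather than a silent surprise.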
Procure off-the-shelf products – Off-the-shelf products are procured based on the product specifications developed in the detailed design step (see Section 4.5).
Delay procurement until the products are actually required to support the implementation. Too much lead time can result in hardware or software that becomes outdated before it can be integrated into the project. Too little lead time could cause procurement delays that impact the project schedule.
Develop software and hardware – The software is written and the hardware is built based on the detailed design. The current state of the practice is to develop the software incrementally and release it in stages. The initial releases implement a few core features, and subsequent releases add more features until all requirements are satisfied. For example, a TMC project might first implement a basic dynamic message sign capability and demonstrate its ability to post messages to the sign and to monitor sign status. Then, more advanced message scheduling and message library management functions could be implemented. This incremental approach enables early and ongoing feedback between the customer and the implementation team. If this approach is used, then a staged delivery plan, which defines the order in which the software will be developed and the staged release process, should be included in the Software Development Plan.
Releases will be developed, tested, and made available to selected users for feedback. Providing feedback on interim releases is only part of the technical oversight that should be performed. Code inspections and code walkthroughs should also be used to check the software quality; these are the only ways to ensure that the software is well structured, well documented, and consistently follows the coding standards and conventions. Independent reviewers with software expertise should be used to help verify software quality on the customer's behalf if the customer agency does not have the right expertise.
Most project managers who have managed software development efforts are familiar with the "90% complete" syndrome, in which software developers quickly reach "90% complete" status but the development effort then languishes as the final 10% takes much more work than anticipated. Project tracking should be based on discrete, measurable milestones instead of arbitrary "% complete" estimates from the software developers. For example, instead of tracking the developer's estimated "% complete", set up a monitoring system that gives credit for completed software only when the piece of code has been successfully tested and integrated into the next release.
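Milestone-based tracking of this kind can be sketched in a few lines of Python. The unit names and status fields below are hypothetical; the point is that a unit earns credit only when it is both tested and integrated, regardless of the developer's own estimate.

```python
def milestone_progress(units):
    """Progress = fraction of software units that have been tested AND
    integrated into a release; developer '% complete' guesses are ignored."""
    done = [u for u in units if u["tested"] and u["integrated"]]
    return len(done) / len(units)

# Hypothetical status records for five software units.
units = [
    {"name": "sign_control", "tested": True,  "integrated": True},
    {"name": "sign_status",  "tested": True,  "integrated": True},
    {"name": "scheduling",   "tested": True,  "integrated": False},  # no credit yet
    {"name": "msg_library",  "tested": False, "integrated": False},
    {"name": "reporting",    "tested": False, "integrated": False},
]
print(f"{milestone_progress(units):.0%}")  # credit only for finished, integrated units
```

Under this measure the example project is 40% complete, even if the developers would report a much higher figure for partially written units.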
Like the end-product hardware and software components, the supporting products can also be developed in stages and released incrementally to encourage early customer feedback.
While the developers will conduct their own tests to identify and fix as many defects as possible, experience shows that the test cases and formal tests should be conducted by an independent party, either within the implementation team or from another organization. The reason for this independence is obvious if you look at the objectives of the software developer and the software tester. The primary objective for the tester is to break the software while the primary objective of the developer is the exact opposite – to make the software work. Few individuals can effectively wear both of these hats. The degree of independence between the developer and the tester (i.e., different people in the same department, different departments, or different companies) and the level of formality in unit testing should be commensurate with the criticality of the software and the size of the project.
The unit verification plan should be reviewed to confirm that it will thoroughly test the hardware/software unit. The traceability matrix should be updated to identify the components, test cases, and test status. The testing should be tracked as it progresses to verify that defects are being identified and addressed properly. A testing process that identifies few defects could indicate excellent software or an incomplete or faulty testing process. Use scheduled technical reviews to understand the real project status. You can monitor the rate at which defects are being discovered to estimate the number of remaining defects and make an educated decision about when the hardware/software will be ready for release.
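The defect-rate monitoring described above can be sketched as a simple readiness check. In this Python illustration the per-cycle defect counts and the release threshold are invented; a real project would set the threshold based on the criticality of the software.

```python
def ready_for_release(defects_per_cycle, threshold=2):
    """Judge readiness from the trend in defect discovery: the rate must be
    falling and the latest cycle must be at or below a project-chosen threshold."""
    falling = all(a >= b for a, b in zip(defects_per_cycle, defects_per_cycle[1:]))
    return falling and defects_per_cycle[-1] <= threshold

# Hypothetical defects found in each weekly test cycle.
print(ready_for_release([14, 9, 5, 1]))   # rate declining toward zero
print(ready_for_release([14, 9, 12, 6]))  # rate rebounded: keep testing
```

A rebounding discovery rate, as in the second case, is a signal to keep testing or to question the completeness of the test process rather than to declare the software ready.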
4.6.3 Outputs.
This step results in hardware and software components that are tested and ready for integration and verification. Artifacts of the development process are also delivered, including the Software/Hardware Development Plans, development environment documentation, unit test results, change control records, and supporting products and documentation. A record of the technical reviews that were conducted should also be included in the project documentation.
4.7 Integration and Verification.
In this step: The software and hardware components are individually verified and then integrated to produce higher-level assemblies or subsystems. These assemblies are also individually verified before being integrated with others to produce yet larger assemblies, until the complete system has been integrated and verified.
Objectives.
- Integrate and verify the system in accordance with the high-level design, requirements, and verification plans and procedures
- Confirm that all interfaces have been correctly implemented
- Confirm that all requirements and constraints have been satisfied
Sources of Information.
- System Requirements document
- High-level design specifications
- Detailed design specifications
- Hardware and software components
- Integration plan
- System and Subsystem Verification Plans
- Subsystem Acceptance Plans
Key Activities.
- Add detail to integration and verification plans
- Establish integration and verification environment
- Perform integration
- Perform verification
Outputs.
- Integration plan (updated)
- Verification plan (updated)
- Integration test and analysis results
- Verification results, including corrective actions taken
Proceed only if you have:
- Documented evidence that the components, subsystems, and system meet the allocated requirements
- Documented evidence that the external and internal interfaces are working and consistent with the interface specifications
4.7.1 Overview.
In this step, we assemble the system components into a working system and verify that it fulfills all of its requirements. Assembling a puzzle is a nice, simple analogy for this step, but the challenge in an ITS project "puzzle" is that you may find that not all of the pieces are available at the same time, some won't fit together particularly well at first, and there will be pressure to change some of the pieces after you have already assembled them. The systems engineering approach provides a systematic process for integration and verification that addresses the challenges and complexity of assembling an ITS system.
Integration and verification are iterative processes in which the software and hardware components that make up the system are progressively combined into subsystems and verified against the requirements, as shown in Figure 27. This process continues until the entire system is integrated and verified against all of its requirements. This is the opposite of the decomposition that was performed during the Requirements and Design steps, which is reflected in the symmetry between the left and right sides of the "V". Components that are identified and defined on the left side of the "V" are integrated and verified on the right.
Figure 27: Iterative Integration and Verification.
In systems engineering, we draw a distinction between verification and validation. Verification confirms that a product meets its specified requirements. Validation confirms that the product fulfills its intended use. In other words, verification ensures that you "built the product right", whereas validation ensures that you "built the right product". This is an important distinction because there are lots of examples of well-engineered products that met all of their requirements but ultimately failed to serve their intended purpose. For example, a bus rapid transit system might implement a signal priority capability that satisfies all of its requirements. This system might not serve its intended purpose if the traffic network is chronically congested and the buses are never actually granted priority by the signal control system when they need it most. Verification is discussed in this section; system validation is described in Section 4.9.
Integrating and verifying the system are key systems engineering activities. The software and hardware specialists who led the previous step are also involved and provide technical support as their components are integrated into the broader system. Stakeholders should also be materially involved in verification, particularly in the system verification activities. As the verification proceeds from detailed component verification to end-to-end system verification, the implementation team becomes less involved and the stakeholders become more involved. The systems engineering activity provides continuity to the process.
4.7.2 Key Activities.
Integrating and verifying the system include basic planning, preparation, and execution steps, described as follows:
Add detail to the integration and verification plans – Recall that integration and verification planning actually began on the left side of the "V". A technique for verifying every requirement was identified as the requirements were specified, and a plan for verifying each requirement was documented. As the system design was defined, the plan for integrating the system components was developed. Detail was added to the general plan when the system was implemented, and the order in which project components and other required resources would be available was defined. The connections between the requirements, system components, and verification techniques were documented in a traceability matrix that was updated as the project development progressed.
The integration plan defines the order in which the project components are integrated with each other and with other systems. Each integration step includes tests that verify the functionality of the integrated assembly, with particular focus on the interfaces. For less complex projects, the integration plan can be informal. For complex projects, there will have to be careful planning so that the system is integrated in efficient, useful increments consistent with the master schedule.
The verification plan is expanded into procedures that define the step-by-step process that will be used to verify each component, subsystem, and system against its requirements. For efficiency, test cases are identified that can be used to verify multiple requirements. Each test case includes a series of steps that will be performed, the expected outputs, and the requirements that will be verified by each step in the test case.
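A test case of the kind just described can be represented as a simple data structure: an ordered list of steps, each with the tester's action, the expected response, and the requirement IDs it verifies. The Python sketch below uses invented IDs and actions; the runner records pass/fail per step and collects the verified requirements for the traceability matrix.

```python
# A test case is a sequence of steps; each step names the tester's action,
# the expected system response, and the requirement IDs it verifies.
# All IDs and actions below are invented for illustration.
test_case = {
    "id": "TC-07",
    "steps": [
        {"action": "post message to sign", "expect": "ACK",
         "verifies": ["REQ-3.1"]},
        {"action": "poll sign status", "expect": "OK",
         "verifies": ["REQ-3.2", "REQ-3.4"]},
    ],
}

def run_test_case(case, system):
    """Perform each step, record pass/fail, and collect the requirements
    that the passing steps verified (for the traceability matrix)."""
    results, verified = [], []
    for step in case["steps"]:
        passed = system(step["action"]) == step["expect"]
        results.append((step["action"], passed))
        if passed:
            verified.extend(step["verifies"])
    return results, verified

# A stand-in "system" that answers every action correctly.
responses = {"post message to sign": "ACK", "poll sign status": "OK"}
results, verified = run_test_case(test_case, responses.get)
print(verified)
```

Because each step carries its requirement IDs, updating the traceability matrix with test status becomes a mechanical byproduct of running the procedure.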
The systems engineering analysis requirements identified in FHWA Rule 940.11/FTA Policy Section VI include "identification of ... testing procedures", which are the same as the verification procedures that are described here.
Every round of verification that is performed as the system is integrated should be thorough so that defects are identified as early and at as low a level as possible. It is much easier to isolate a defect during component-level verification than it is during system verification, when the entire system is assembled and many different components could be contributing to the problem. To put it in plain language, it is much easier to find the needle before you have assembled a haystack around it.
If test and simulation tools are used to support system verification, then these tools should first be verified with the same care as the system. Verifying a system using a simulator that has not been verified could result in invalid results or compensating errors in which a defect in the end product is masked by a defect in the verification tool.
There are four basic techniques that are used to verify each requirement:
- Test: Direct measurement of system operation. Defined inputs are provided and outputs are measured to verify that the requirements have been met. Typically, a test includes some level of instrumentation. Tests are more prevalent during early verification, when component-level capabilities are being exercised and verified.
- Demonstration: Witnessing system operation in the expected or simulated environment without the need for measurement data. For example, a requirement that an alarm be issued under certain conditions could be verified through demonstration. Demonstrations are more prevalent in system-level verification, when the complete system is available to demonstrate end-to-end operational capabilities.
- Inspection: Direct observation of requirements such as construction features, workmanship, dimensions and other physical characteristics, and software language.
- Analysis: Verification using logical, mathematical, and/or graphical techniques. Analysis is frequently used when verification by test would not be feasible or would be prohibitively expensive. For example, a requirement that a website support up to 1,000 simultaneous users would normally be verified through analysis.
As each test case is performed, all actions and system responses are recorded. Unexpected responses are documented and analyzed to determine the cause and to define a plan of action, which might involve repeating the test, revising the test case, fixing the system, or even changing the requirement. Any changes to the test cases, the requirements, or the system are managed through the configuration management process.
It is important to keep strict configuration control over the system components and documentation as you proceed through verification. The configuration of each component and the test-case version should be verified and duly noted as part of the verification results. It is human nature to want to find and fix a problem "on the spot", but it is very easy to lose configuration control when you jump in to make a quick fix. (See Section 5.4 for more information about configuration management.)
As verification proceeds, you normally will have to retest each portion of the system more than once. For example, a new software release that adds new capabilities or fixes previously identified defects may be produced. It is important not only to verify the new features or bug fixes when verifying the new release but also to do regression testing to verify that the portion of the software that used to work still does. Regression tests are important because experience shows that old defects may reappear in later releases or that a fix to one part of the software may break another part. For large projects, automated testing tools can be used to automatically run a suite of regression tests to fully test each new software release.
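An automated regression suite of the kind mentioned above can be as simple as a set of unit tests that pin down previously verified behavior. The sketch below uses Python's standard `unittest` module; the message-formatting routine and its expected behavior are hypothetical stand-ins for whatever "used to work" in your system.

```python
import unittest

# Hypothetical routine under regression test: formats a message for a
# dynamic message sign (upper-cases and truncates to the sign width).
def format_sign_message(text, max_len=20):
    return text.upper()[:max_len]

class RegressionSuite(unittest.TestCase):
    # Behavior from earlier releases that every new release must preserve.
    def test_uppercases(self):
        self.assertEqual(format_sign_message("fog ahead"), "FOG AHEAD")

    def test_truncates_long_messages(self):
        self.assertEqual(len(format_sign_message("x" * 50)), 20)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all regression tests passed:", result.wasSuccessful())
```

Running the full suite against every new release turns "the old features still work" from an assumption into a checked result, which is exactly the protection regression testing is meant to provide.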
Resist the temptation to scale back verification activities due to budget or schedule constraints. This would be false economizing because defects that slip through will be even more expensive to fix later in the system life cycle. As previously noted, it is most efficient to identify defects early in the verification process. This approach also minimizes the number of issues that will be identified during system verification, which is the most formal and most scrutinized verification step. Issues that occur during a formal system verification that is witnessed by stakeholders can undermine confidence in the system. Be sure to run the system verification test cases beforehand to the extent possible to reduce the risk of unexpected issues during formal system verification.
4.7.3 Outputs.
Integration and verification result in a documentation trail showing the activities that were performed and their results. The outputs include:
- Integration plan (updated) - This plan defines the sequence of steps that were performed to integrate the system. It also defines the integration tests that were performed to test the interfaces in detail and generally test the functionality of the assembly.
- Verification plan (updated) and procedures - This plan documents the approach that was used to verify each of the system and subsystem requirements. The plan identifies test cases that were used to verify each requirement and general processes that were used to conduct the test cases and deal with verification issues. Verification procedures elaborate each test case and specify the step-by-step actions and expected responses. The traceability matrix ties the requirements to the design components and the test cases.
- Integration test and analysis results - This is a record of the integration tests that were actually conducted, including analysis and disposition of any identified anomalies.
- Verification results - This is a summary of the verification results. It should provide evidence that the system/subsystem/component meets the requirements and identify any corrective actions that were recommended or taken as a result of the verification process.
4.7.4 Examples.
Many verification plans that have been developed for ITS projects are available on the Internet. Although they have many different titles – integration test plans, functional test plans, verification plans – they have similar content. For example, Table 13 is an excerpt from a functional test plan that was used to test the Oregon DOT TripCheck website. The script in the table lists each action that the tester should take and the expected result from the system in a step-by-step procedure that tests links in a website navigation panel.
Table 14 is a verification procedure from a Maryland Chart II Integration Test Plan that includes a bit more background for each test case in a slightly different format.
Reports are generated that document the actual results of the verification tests that were performed. Table 15 is a brief excerpt from a test result report for the desktop application that is used by ODOT to update data on the TripCheck website. Each row in the table summarizes the results for each test case. This excerpt was selected because it includes one of the few test cases in this report in which the actual results did not match the expected results. Note that in Test 2, an error occurred that exposed a software defect that had to be fixed. Identification of defects like this before the system is operational is one of the key benefits of a thorough verification process.
4.8 Initial Deployment.
In this step: The system is installed in the operational environment and transferred from the project development team to the organization that will own and operate it. The transfer also includes support equipment, documentation, operator training, and other enabling products that support ongoing system operation and maintenance. Acceptance tests are conducted to confirm that the system performs as intended in the operational environment. A transition period and warranty ease the transition to full system operation.
Objectives.
- Uneventful transition to the new system
Sources of Information.
- Integrated and verified system, ready for installation
- System Acceptance Plan
Key Activities.
- Plan for system installation and transition
- Deliver the system
- Prepare the facility
- Install the system
- Perform acceptance tests
- Transition to operation
Outputs.
- Hardware and software inventory
- Final documentation and training materials
- Delivery and installation plan, including shipping notices
- Transition Plan with checklists
- Test issues and resolutions
- Operations and maintenance plan and procedures
Proceed only if you have:
- Formally accepted the system
- Documented acceptance test results, anomalies, and recommendations
4.8.1 Overview.
Up to this point, the system has been tested primarily in a lab environment. The next step is to ship the system to the actual deployment site(s), install and check it out, and make sure the system and personnel are ready to transition to system operations and maintenance (O&M), as shown in Figure 28.
Figure 28: Transition from Development Team to Operations and Maintenance Team.
Larger systems may be installed in stages. For example, a closed-circuit television (CCTV) camera network may be built out incrementally over the course of several years and several projects. This may be done to spread the costs across several fiscal years or to synchronize with other construction projects in the region. In other cases, phased deployment may be performed to mitigate risk by deploying the essential core of the system and then adding features over time. If it is necessary to deploy the system in stages, whether due to funding constraints, to mitigate risk, or to synchronize with other projects, it is important to understand the dependencies between successive deployments and to prioritize the projects accordingly.
4.8.2 Key Activities.
The following tasks are cooperatively performed to deliver, install, and transition the system to full operational status:
Plan for system installation and transition - This step represents the handoff of the tested system from the project team to the O&M team in the field. The deployment sites must be prepared, the system must be delivered and installed at each site and tested, and O&M staff must be trained. All of this is documented in a System Delivery and Installation Plan. If the new system is replacing an existing system, a smooth transition will be planned and documented in a Transition Plan, including a backup strategy to revert to the existing system just in case the new system does not operate as intended. Each of these plans is further detailed below.
The deployment strategy should take into consideration the complexity of the system, whether it will be deployed at multiple sites, and, if so, the order of the deployments. It might be a good idea to bring up a minimal configuration or a single installation at first and to add further functionality and other sites once the initial installation is operational.
Until delivery, the system's components – the hardware and software – have been inventoried and kept under version control by the engineering team. Once delivered, however, ownership may change hands to the agency that will operate and maintain the system. To facilitate system delivery, the engineering and operating agencies should agree ahead of time on who will maintain the inventory, the software and hardware versions, any vendor maintenance agreements, and the maintenance records.
When the system is delivered, the O&M team should perform an initial inspection and preliminarily accept the system. This might be a formal review of the hardware/software inventory, a check of the documentation, or perhaps a start-up test. More extensive formal acceptance tests will be conducted once the system is fully installed.
The first step is to create the Transition Plan, which clearly defines how the system will be transitioned to operational status. This plan should include the validation criteria; that is, how are you going to know that the system is performing correctly once it is operational? It is a good idea to include a series of checklists in the Transition Plan that identify all key pieces that must be in place and working prior to switching over to full operation. If there are still open issues found during system testing (and there likely will be), evaluate each of them to determine whether or not they should be fixed or a work-around created prior to placing the system into full operation. A formal review of the Transition Plan should be held with the implementation team, the operations team, and other key personnel.
When transitioning to operation, especially when replacing an existing system, a contingency back-out plan should be included as part of the Transition Plan so that, in the event that the new system does not operate correctly, you can revert to the older system until the issues have been resolved.
All operations and maintenance staff should be in place and properly trained. The maintenance plans for the system should be reviewed by the O&M team; check to make sure that all maintenance procedures and hardware/software maintenance records are in place and adequate to properly maintain the system.
The operational procedures and any special equipment needed to operate or monitor the system should be ready, tested, and operating correctly. It's a good idea to take some performance measurements on the system at this stage so that you can estimate performance following transition to full operational status. Establish user accounts, initialize databases or files as identified in the Transition Plan, and make sure that all test data has been removed or erased. The system should be set to begin operations.
Some transitions to full operation can be complex, especially when an existing system that many people use is being replaced. Just as we get annoyed when we can't access the Internet for a few hours, users may also become irritated if the system is down for any period of time. You might want to consider planning the transition on a weekend or in the evening if possible to cause the least disruption to system users. Also consider holding a "dry run" so that everyone knows their role during the transition period and performs their assigned task to make the transition as smooth as possible.
Finally, a transition readiness review meeting should be held with the O&M team, the support personnel who are on hand to address last-minute issues, representatives from other interfacing systems, the project sponsor, and other key personnel. Use the checklist in the transition plan to assess system readiness. Only after all checklist items have been declared as ready should the go-ahead be given for the system to transition to full operational status.
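The go/no-go logic of the readiness review is essentially a check that every checklist item is ready, with anything outstanding reported for follow-up. The Python sketch below illustrates this; the checklist items are invented examples drawn from the activities described above.

```python
def ready_to_transition(checklist):
    """Give the go-ahead only when every checklist item has been declared
    ready; otherwise report what is still outstanding."""
    outstanding = [item for item, ready in checklist.items() if not ready]
    return (len(outstanding) == 0, outstanding)

# Hypothetical readiness checklist from the Transition Plan.
checklist = {
    "O&M staff trained": True,
    "user accounts established": True,
    "test data removed": False,
    "back-out plan rehearsed": True,
}
go, outstanding = ready_to_transition(checklist)
print(go, outstanding)
```

In this example the review would withhold the go-ahead because test data has not yet been removed, which mirrors the rule that all items must be ready before full operation begins.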
Following transition, the team will quickly ramp down to include only the O&M personnel. It might be advisable to keep a few system support personnel around through the validation period so that any issues that arise in the early stages are resolved quickly.
4.8.3 Outputs.
The primary output of this step is a fully installed system (in a facility or site modified to meet the system requirements) that has been transitioned to operational status. To support this effort, the following outputs should be generated:
- A hardware and software inventory, under configuration control, that includes versioning information, maintenance records and plans, and other property management information
- Final documentation and training materials
- Delivery and installation plan, including shipping notices
- Updated test plan and procedures
- Transition Plan with checklists
- Test issues and resolutions
- Operations and Maintenance Plan and procedures
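As a sketch of what one configuration-controlled inventory record might look like, the following uses a small Python data structure; the field names (asset_id, version, and so on) are illustrative assumptions, not a standard schema.

```python
# Minimal sketch of an inventory record with versioning information and
# attached maintenance records; all field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class InventoryItem:
    asset_id: str
    description: str
    version: str                      # versioning information
    location: str
    maintenance_records: list = field(default_factory=list)

    def record_maintenance(self, note: str) -> None:
        self.maintenance_records.append(note)

cctv = InventoryItem("CAM-017", "CCTV camera", "fw 2.4.1", "I-680 MP 12.3")
cctv.record_maintenance("2018-04-01: lens cleaned, firmware verified")
print(cctv.asset_id, len(cctv.maintenance_records))
```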
4.8.4 Examples.
Deployment plans and installation plans can be complex documents for ITS projects that involve significant center and/or field equipment installation. Planning for deployment and installation must begin early in the project for such systems. For example, the Sunol Smart Carpool Lane Joint Powers Agency (JPA) developed a deployment plan as part of its Systems Engineering Management Plan during initial planning for the I-680 Smart Lane Project. This plan defines deployment activities (see Figure 29), roles and responsibilities, deployment personnel by position, installation equipment and tools, system documentation, and installation considerations such as safety, code and industry standards, planning requirements, weather accommodations, and shop drawing submittals. More detailed installation plans will be prepared by the system integrator based on this deployment plan.
Figure 29: I-680 Smart Lane Project Deployment Activities Overview.
4.9 System Validation.
In this step: After the ITS system has passed system verification and is installed in the operational environment, the system owner/operator, whether the state DOT, a regional agency, or another entity, runs its own set of tests to make sure that the deployed system meets the original needs identified in the Concept of Operations.
Confirm that the installed system meets the user's needs and is effective in meeting its intended purpose.
Sources of Information.
- Concept of Operations
- Verified, installed, and operational system
- System Validation Plan
- Update Validation Plan as necessary and develop procedures
- Validate system
- Document validation results, including any recommendations or corrective actions
- System Validation Plan (update) and procedures
- Validation results
Proceed only if you have:
- Validated that the system is effectively meeting its intended purpose
- Documented issues/shortcomings
- Established ongoing mechanisms for monitoring performance and collecting recommendations for improvement
- Made modifications to the Concept of Operations to reflect how the system is actually being used
4.9.1 Overview.
A few readers may be surprised to see that there is another step in the "V" between initial deployment and operations and maintenance. After all, in the last few chapters we have already verified that the system meets all of its requirements, installed the system and trained the users, and the customer has successfully conducted acceptance tests and formally accepted the system. Aren't we done?
The answer is: yes and no. Yes, the system has been put into operation and is beginning to be used for its intended purpose. No, we aren't done. Now that the system is beginning to be used in the operational environment, we have our first good opportunity to measure just how effective the system is in that environment (i.e., system validation).
Figure 30: Validation Occurs Throughout the Systems Engineering Process.
In systems engineering, we draw a distinction between verification and validation. Verification confirms that a product meets its specified requirements. Validation confirms that the product fulfills its intended use. The majority of system verification can be performed before the system is deployed. Validation really can't be completed until the system is in its operational environment and is being used by the real users. For example, validation of a new signal control system can't really be completed until the new system is in place and we can see how effectively it controls traffic.
Of course, the last thing we want to find is that we've built the wrong system just as it is becoming operational. This is why the systems engineering approach seeks to validate the products that lead up to the final operational system to maximize the chances of a successful system validation at the end of the project. This approach is called in-process validation and is shown in Figure 30. As depicted in the figure, validation was performed on an ongoing basis throughout the process:
- The business case for the project was documented and validated by senior decision makers during the initial feasibility study.
- User needs were documented and validated by the stakeholders (i.e., "Are these the right needs?") during the Concept of Operations development.
- Stakeholder and system requirements were developed and validated by the stakeholders (i.e., "Do these requirements accurately reflect your needs?").
- As the system was designed and the software was created, key aspects of the implementation were validated by the users. Particular emphasis was placed on validating the user interface design since it has a strong influence on user satisfaction.
Since validation was performed along the way, there should be fewer surprises during the final system validation that is discussed in this step. The system will have already been designed to meet the user's expectations, and the user's expectations will have been set to match the delivered system.
4.9.2 Key Activities.
The system validation is the responsibility of the system owner and will typically be performed by the system users.
Update the Validation Plan and develop procedures – An initial Validation Plan was created at the same time as the Concept of Operations earlier in the life cycle (see Section 4.3). The performance measures identified in the Concept of Operations forced early consideration and agreement on how system performance and project success would be measured. A Validation Plan was prepared that defined the consensus validation approach and the outcomes that should be measured.
It is important to think about the desired outcomes and how they will be measured early in the process because some measures may require data collection before the system is operational to support "before and after" studies. For example, if the desired outcome of the project is an improvement in incident response times, then data must be collected before the system is installed to measure existing response times. This "before" data is then compared with data collected after the system is operational to estimate the impact of the new system. Even with "before" data, determining how much of the difference between "before" and "after" data is actually attributable to the new system is a significant challenge because there are many other factors involved. Without "before" data, validation of these types of performance improvements is impossible.
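A minimal before-and-after comparison can be sketched as follows; the incident response times are invented sample values, and as noted above, the computed difference cannot simply be attributed to the new system.

```python
# Sketch of a "before and after" comparison for incident response
# times (minutes); the sample values are invented for illustration.
from statistics import mean

before = [22.0, 18.5, 25.0, 21.0, 19.5]   # collected before deployment
after  = [17.0, 15.5, 19.0, 16.5, 14.0]   # collected after deployment

improvement = mean(before) - mean(after)
pct = 100 * improvement / mean(before)
print(f"Mean response time improved by {improvement:.1f} min ({pct:.0f}%)")
# Caution: other causal factors (traffic growth, weather, staffing)
# should be examined before crediting the new system with the change.
```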
In addition to objective performance measures, the system validation may also measure how satisfied the users are with the system. This can be assessed directly using surveys, interviews, in-process reviews, and direct observation. Other metrics that are related to system performance and user satisfaction can also be monitored, including defect rates, requests for help, and system reliability. Don't forget the maintenance aspects of the system during validation – it may be helpful to validate that the maintenance group's needs are being met as they maintain the system.
Detailed validation procedures may also be developed that provide step-by-step instructions on how support for specific user needs will be validated. At the other end of the spectrum, the system validation could be a set time period when data collection is performed during normal operations. This is really the system owner's decision – the system validation can be as formal and as structured as desired. The benefit of detailed validation procedures is that the validation will be repeatable and well documented. The drawback is that a carefully scripted sequence may not accurately reflect "intended use" of the system.
The measurement of system performance should not stop after the validation period. Continuing performance measurement will enable you to determine when the system becomes less effective. The desired performance measures should be reflected in the system requirements so that these measures are collected as a part of normal system operation as much as possible. Similarly, the mechanisms that are used to gauge user satisfaction with the system (e.g., surveys) should be used periodically to monitor user satisfaction as familiarity with the system increases and expectations change.
Frequently, the way in which the system is used will evolve during initial system operation. Significant departures from anticipated procedures should also be noted and documented in the Concept of Operations. For example, consider an HOV reversible lane facility that uses system detectors to verify that all vehicles have exited the facility. During system operation, the agency may find that the reliability of system detectors is not as high as anticipated. To compensate, the agency adjusts its operating procedures to perform a physical tour of the facility prior to opening it up in the opposite direction. The agency should amend its ConOps to reflect this new way of operating the HOV facility.
Deficiencies of the project development process should also be reviewed to determine where the process may have fallen down, so that an improved process can be used on the next project. Without worrying about attribution to individuals, determine how a significant deficiency slipped through the process. Were the needs not properly specified? Were requirements incorrectly specified based on the needs? If so, were there opportunities for the stakeholders to walk through the requirements and identify the problem? A "lessons learned" review of the project development process at the conclusion of the system validation can be very valuable.
4.9.3 Outputs.
System validation should result in a document trail that includes an up-to-date Validation Plan; validation procedures (if written); and validation results, including disposition of identified deficiencies. There are several industry and government standard outlines for validation plans, including IEEE Standard 1012, which is intended for software verification and validation but is also applicable to broader system verification and validation. Note that this standard covers both verification and validation plans with a single outline.
4.9.4 Examples.
There are few good examples of system validations that have been performed for ITS projects. Some of the best examples are evaluations that have been performed for field operational tests (FOTs), and other evaluations that have looked in detail at the benefits of ITS. For example, the evaluation of the ORANGES Electronic Payment Systems FOT initially identified system goals and then related them to quantitative and qualitative performance measures, as shown in Table 16. Each of the performance measures was then evaluated, in many cases using before-and-after study techniques, to determine whether the system goals were achieved. Figure 31 shows results supporting the transponder market penetration goal (Goal 2). This evaluation report is a good example of many validation techniques, including the collection of baseline data, before-and-after studies, statistical analysis, evaluation of other causal factors, and interview and survey activities.
Table 16: ORANGES FOT Goals and Related Performance Measures (excerpt):
- Quantitative measures: revenue received; number of smart card users that newly acquire a transponder; average transaction times; % revenue prepaid; procurement, inventory, delivery, and commission costs for any conventional passes made available on smart cards; % equipment availability; card use profiles; average prepaid balance; modal use profile.
- Customer feedback: general benefits, ease of use, convenience of revaluing.
- Operations/maintenance staff feedback: general benefits, reduced payment disputes, reduced transfer abuse, ease of customer use, maintenance.
- Planning/management staff feedback: general benefits, more comprehensive data collection.
- Partnership feedback: general institutional issues, interagency collaboration.
Figure 31: ORANGES Evaluation – Cumulative Transponders Issued.
4.10 Operations and Maintenance.
In this step: Once the customer has accepted the ITS system, the system operates in its typical steady state. System maintenance is routinely performed and performance measures are monitored. As issues, suggested improvements, and technology upgrades are identified, they are documented, considered for addition to the system baseline, and incorporated as funds become available. An abbreviated version of the systems engineering process is used to evaluate and implement each change. This occurs for each change or upgrade until the ITS system reaches the end of its operational life.
Use and maintain the system over the course of its operational life.
Sources of Information.
- System requirements (operations/maintenance requirements)
- Operations and Maintenance Plan and procedures
- Training materials
- Performance data
- Evolving stakeholder needs
- Conduct Operations and Maintenance Plan reviews
- Establish and maintain all operations and maintenance procedures
- Provide user support
- Collect system operational data
- Change or upgrade the system
- Maintain configuration control of the system
- Provide maintenance activity support
- System performance reports
- Operations logs
- Maintenance records
- Updated operations and maintenance procedures
- Identified defects and recommended enhancements
- Record of changes and upgrades
- Budget projections and requests
Proceed only if you have:
Demonstrated that the system has reached the end of its useful life.
4.10.1 Overview.
Now that the ITS system is up and running, it enters a "steady state" period that lasts until the system is retired or replaced. During this period, operators, maintainers, and users of the system may identify issues, suggest enhancements, or identify potential efficiencies. New releases of hardware and software will be installed and routine maintenance will be performed. Approved changes and upgrades are incorporated into the system baseline using the systems engineering process, as shown in Figure 32. O&M personnel might also identify process changes that may streamline O&M activities. All changes to the processes should be documented.
Figure 32: Changes/Upgrades Performed Using Systems Engineering.
Successful operations and maintenance of the system will lead to customer and user satisfaction; for example, the CCTVs will be online and fully functional at all times; rush-hour drivers will be able to obtain accurate, up-to-the-minute speed, accident, and construction reports before they head out the door; and transit vehicles will arrive on time. This is when the system benefits are realized.
4.10.2 Key Activities.
In most systems, operations and maintenance is where the lion's share of life-cycle costs is incurred. The key activities are performed periodically unless a change is severe enough to affect system performance dramatically.
Conduct Operations and Maintenance Plan reviews – Operations and maintenance roles and required resources are defined in the Concept of Operations (see Section 4.3) and are refined as the system is developed. At this point, operations and maintenance personnel and the system sponsor should all be in agreement on the level of support to be provided with regard to staffing, frequency of technology refreshes (e.g., how often the software or hardware is upgraded to a new release), performance monitoring and reporting, processes for handling identified issues, and level of support provided to the end user.

Establish and maintain operations and maintenance procedures – Although the processes to be used for identifying, tracking, resolving, and recording all system issues will have been established during the initial deployment step, specific detailed procedures will be further developed and maintained as efficiencies are identified. All personnel will be trained in the procedures and are responsible for their use.

Provide user support – End users of your system, whether they are traffic management center operators or a person whose farecard is not working in the new farecard reader, need to be able to contact someone for user support. This support could be handled by a formal call center or perhaps only a person who performs the task during spare time via e-mail, depending on the type and complexity of the system to be supported. Either way, the user support personnel should be properly trained, should document all calls from initiation through final resolution, and should have access to system experts if needed. These user support personnel should also provide periodic updates on user inquiries and resolutions.
A database that holds information about all user support inquiries can help you to review the types of calls that were received and to notice trends. If there seems to be a recurring problem or confusion about some aspect of the system, it could mean that a system modification should be considered.
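A simple trend check over such a call log might look like the following sketch; the call categories and the threshold are assumptions for illustration.

```python
# Sketch of a recurring-issue check over a user-support call log.
# Categories with enough repeat calls may warrant a system modification.
from collections import Counter

calls = [
    "farecard reader error", "login problem", "farecard reader error",
    "map display frozen", "farecard reader error", "login problem",
]

THRESHOLD = 3  # flag categories with at least this many calls

def recurring_issues(call_log, threshold=THRESHOLD):
    counts = Counter(call_log)
    return [issue for issue, n in counts.most_common() if n >= threshold]

print(recurring_issues(calls))
```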
All proposed changes should be prioritized and will require careful cost estimates, schedules, planning, testing, and coordination with operations and maintenance prior to installation. Each approved change will require a new system release level and should be coordinated between the O&M and development teams.
Each potential change to the system should be assessed by the affected stakeholders and the project sponsor to determine whether or not it should be incorporated. Before approving the change, you should clearly understand and document the effect that it will have on other parts of the system, on the operation of the system as a whole, and on the maintenance of the system. If you make this assessment early on by following the systems engineering process, you won't discover a problem months later in the lab when the impact on the schedule and budget will be significantly higher.
Changes are approved and managed using the configuration management process defined in Section 5.4. You should use the systems engineering process, from Concept of Operations through design, verification, and installation, to add any approved change to the system. Basically, each change requires another, possibly abbreviated, pass through the "V". Approved changes are typically aggregated into builds or releases, although you may want to introduce particularly complex changes individually.
Each build or release should be subjected to thorough verification testing prior to installation. There are many stories of "changes that affected only a few lines of code" that ultimately resulted in operational failure. It is important to run regression tests that verify that a seemingly minor change in one part of the system didn't have an unexpected effect on another part of the system. Statements like "I didn't change that area so there is no need to test it" should be a red flag.
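A minimal regression suite can be sketched with Python's unittest module. The toll-rate function and its pricing rule are hypothetical; the point is that both tests are re-run after every change, including the test for the area that supposedly didn't change.

```python
# Regression-test sketch: even a "minor" change elsewhere in the system
# is verified against unchanged behavior. The pricing rule is invented.
import unittest

def toll_rate(occupancy):
    """Hypothetical rule: a higher per-mile toll when the lane is busier."""
    return 0.25 if occupancy < 0.5 else 0.50

class RegressionTests(unittest.TestCase):
    def test_low_occupancy_rate(self):
        self.assertEqual(toll_rate(0.3), 0.25)

    def test_high_occupancy_rate(self):
        # Kept even though "that area didn't change": a regression test
        # guards against unexpected side effects of edits elsewhere.
        self.assertEqual(toll_rate(0.8), 0.50)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("regression suite passed" if result.wasSuccessful() else "failures found")
```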
In many cases, the development and test lab that was available during the initial system development may not be available once the system has been deployed. (It might even be the system that was deployed!) Therefore, it's common to establish a test environment to test software product upgrades or minor fixes without interfering with the current operational system.
This is one area where the state of the practice in ITS lags a bit. It is common for agencies to require good configuration management practices during system development but to lose configuration control after the system is delivered. For example, at many agencies, if you want to know the configuration of a field controller at a particular location, you have to take a trip to the field and look inside the cabinet.
Consider using a database tool or a similar property management application to help you keep track of all equipment, together with maintenance records, maintenance schedules, and so forth. Check it weekly and schedule the maintenance required.
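A weekly maintenance check against such an equipment database could be sketched as follows; the equipment records and service intervals are invented for illustration.

```python
# Sketch of a periodic maintenance check: flag equipment whose next
# service date has arrived. Records and intervals are hypothetical.
from datetime import date, timedelta

equipment = [
    {"id": "DMS-04", "last_service": date(2018, 1, 10), "interval_days": 90},
    {"id": "CAM-17", "last_service": date(2018, 3, 20), "interval_days": 30},
]

def maintenance_due(items, today):
    """Return ids whose next service date is on or before today."""
    return [e["id"] for e in items
            if e["last_service"] + timedelta(days=e["interval_days"]) <= today]

print(maintenance_due(equipment, date(2018, 4, 15)))
```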
4.10.3 Outputs.
The current system configuration, including hardware, software, and operational information, must be documented and maintained. A complete record of all system changes should also be documented and readily available. This is especially helpful when trying to duplicate an anomaly identified by a user or operator.
System performance reports should be generated, both from any installed automated performance monitors and from user-support calls received. Trend analysis reports can be generated and reviewed to identify system deficiencies.
Figure 33: Kentucky ITS M&O Plan Best Practices.
4.10.4 Examples.
Operations and Maintenance Plans.
The Kentucky Transportation Center developed a Maintenance and Operations Plan for ITS in Kentucky that provides recommendations for supporting and coordinating ITS maintenance and operations activities throughout the Kentucky Transportation Cabinet. It inventories ITS equipment and systems, identifies national best practices for operations and maintenance (see Figure 33), assesses current maintenance and operations practices in Kentucky, and makes recommendations. Many of the recommendations and best practices identified in the report will be relevant to other agencies. This broad agency-wide plan complements the detailed procedures that are used to operate and maintain individual systems.
Operations and Maintenance Procedures.
Operations and maintenance procedures are detailed and don't make particularly good reading unless you actually operate and maintain one of these systems, in which case they are indispensable. These manuals will be subject to relatively frequent changes as personnel find errors and discover new and better ways to operate and maintain the system. A short excerpt from the CHART II O&M Procedures is shown in Figure 34.
Figure 34: CHART II O&M Procedures.
Change and Upgrade Plans.
Metro Transit in Seattle, Washington, upgraded its existing Transit AVL system to support transit traveler information systems as part of the Smart Trek program. To support this upgrade, detailed cost estimates were made based on systems engineering analysis of the AVL enhancements that would be required to support the traveler information objectives of the Smart Trek project. The estimate is shown in Table 17.
4.11 Retirement/Replacement.
In this step: Operation of the ITS system is periodically assessed to determine its efficiency. If the cost to operate and maintain the system exceeds the cost to develop a new ITS system, the existing system becomes a candidate for replacement. A system retirement plan will be generated to retire the existing system gracefully.
- Remove the system from operation, gracefully terminating or transitioning its service
- Dispose of the retired system properly
Sources of Information.
- System requirements (retirement/disposal requirements)
- Service life of the system and components
- System performance measures and maintenance records
- Plan system retirement
- Deactivate system
- Remove system
- Dispose of system
- System retirement plan
- Archival documentation
Proceed only if you have:
- Planned the system retirement
- Documented lessons learned
- Disposed of the retired system properly
4.11.1 Overview.
Systems are retired from service for a variety of reasons. Perhaps the system is being replaced by a newer system, or maybe the Concept of Operations has changed such that stakeholder needs are going to be met in an alternative manner that will no longer require use of the system. For example, the emergency call boxes that currently dot many of the nation's highways are beginning to be retired because their usage has decreased dramatically due to widespread use of cell phones. Many of the first-generation ITS systems are twenty years old and approaching the end of their useful life. Regardless of the reason for the retirement of the system, you should make sure that everything is wrapped up (e.g., hardware and software inventory identified for disposal is audited, final software images are captured, and documentation is archived), the contract is closed properly, and the disposal of the system is planned and executed.
4.11.2 Key Activities.
This step represents the end of the system life cycle – the retirement and disposal of the ITS system. An important characteristic of the systems engineering process is the planning of all events; the retirement of the system should be planned as well.
The retirement plan should include a complete inventory of all software and hardware, final system and documentation configurations, and other information that captures the final operational status of the system. This should include identification of ownership so that owners can be given the option to keep their equipment and use it elsewhere. It should also include how the system and documentation will be disposed of, including an assessment and plan if special security measures should be in place or if there are environmental concerns that might dictate the site of disposal. You should also plan to erase the content of all storage devices to protect any personal data that might pose privacy concerns. The retirement plan should be reviewed and approved by all parties, including the agency or contractor providing O&M, the owner of the system (if different), and other key personnel.
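One audit step from such a retirement plan, verifying that every item marked for disposal has been inventoried and had its storage erased, can be sketched as follows; the record fields are assumptions for illustration.

```python
# Sketch of a disposal-readiness audit: an item may be disposed of only
# after it is inventoried and its storage devices are erased (to protect
# any personal data). Field names are invented, not a standard schema.

disposal_list = [
    {"id": "SRV-01", "inventoried": True, "storage_erased": True},
    {"id": "SRV-02", "inventoried": True, "storage_erased": False},
]

def disposal_blockers(items):
    """Return ids that are not yet ready for disposal."""
    return [i["id"] for i in items
            if not (i["inventoried"] and i["storage_erased"])]

print(disposal_blockers(disposal_list))
```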
If the system to be retired is not documented as well as it should be, steps should be taken to capture all necessary data and to reverse engineer interfaces and any system configuration information needed to support a replacement system. Existing databases may need to be exported and translated into a format suitable for the replacement system.
The next activity is to execute the retirement plan and record the results. It's also a good idea to hold a "lessons learned" meeting that includes suggested system improvements. All recommendations should be archived for reference in future system disposals. The O&M contract should be officially closed out if one exists.
4.11.3 Outputs.
A system retirement plan will be generated that describes the strategy for removing the system from operation and disposing of it. Its execution will result in the retirement of the ITS system. The final system configuration, including hardware, software, and operational information, will be documented and archived, together with a list of "lessons learned".
