Optimizing performance-sensitive and data-driven applications is a multi-objective problem. Computer scientists have devised a number of strategies for tackling it, including new algorithms, frameworks and libraries, scientific simplifications, mathematical optimization, auto-parallelization, and even auto-tuning. Much of this work already places a heavy burden on researchers and may take years of sustained investment. Yet most of these strategies assume the hardware is known beforehand and remains the same throughout the application's execution. But what if the hardware is unknown? What if we could design it from scratch? What if we could start optimizing from the hardware side? This talk focuses on the bigger challenge of co-designing infrastructures that are efficient and cost-effective for known scientific problems. We assume that software and hardware can be co-designed and evolve together in synergy, extracting more performance and producing faster, more reliable, and more accurate scientific results in a scalable manner. At a time when funding is ever more constrained, we also cover strategies based on market intelligence for making the most of research grants, including discussions of scaling up versus scaling out, cost-saving moves that are not worth making, and the sharing economy.