Author details
Author: Brij Kishore PANDEY
Documents available by this author (1)
Building ETL Pipelines with Python: Create and deploy enterprise-ready ETL pipelines by employing modern methods / Brij Kishore PANDEY / PACKT PUBLISHING (2023)
Title: Building ETL Pipelines with Python: Create and deploy enterprise-ready ETL pipelines by employing modern methods
Document type: e-book
Authors: Brij Kishore PANDEY
Publisher: PACKT PUBLISHING
Publication year: 2023
ISBN/ISSN/EAN: 9781804615256
General note: copyrighted
Languages: English (eng)

Summary: Develop production-ready ETL pipelines by leveraging Python libraries and deploying them for suitable use cases.

Key Features:
- Understand how to set up a Python virtual environment with PyCharm
- Learn functional and object-oriented approaches to create ETL pipelines
- Create robust CI/CD processes for ETL pipelines
- Purchase of the print or Kindle book includes a free PDF eBook

Book Description:
Modern extract, transform, and load (ETL) pipelines for data engineering have favored the Python language for its broad range of uses and its large assortment of tools, applications, and open source components. With its simplicity and extensive library support, Python has emerged as the undisputed choice for data processing. In this book, you'll walk through the end-to-end process of ETL data pipeline development, starting with an introduction to the fundamentals of data pipelines and the setup of a Python development environment for creating them. Once you've explored the ETL pipeline design principles and ETL development process, you'll be equipped to design custom ETL pipelines. Next, you'll get to grips with the steps in the ETL process: extracting valuable data; performing transformations through cleaning and manipulation while ensuring data integrity; and ultimately loading the processed data into storage systems. You'll also review several ETL modules in Python, comparing their pros and cons when building data pipelines, and leverage cloud tools, such as AWS, to create scalable data pipelines. Lastly, you'll learn about test-driven development for ETL pipelines to ensure safe deployments.
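The extract-transform-load sequence the description walks through can be illustrated with a minimal sketch. This is not code from the book; the record names, integrity rules, and SQLite target are illustrative assumptions chosen to keep the example self-contained.

```python
import sqlite3

# Hypothetical raw records standing in for an external source (illustrative only).
RAW_ROWS = [
    {"name": " Alice ", "amount": "120.50"},
    {"name": "Bob", "amount": "not-a-number"},
    {"name": "", "amount": "75.00"},
]

def extract(rows):
    """Extract step: yield raw records from the source."""
    yield from rows

def transform(rows):
    """Transform step: clean values and drop rows that fail integrity checks."""
    for row in rows:
        name = row["name"].strip()
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # discard rows whose amount cannot be parsed
        if name:  # enforce a simple integrity rule: name must be non-empty
            yield (name, amount)

def load(rows, conn):
    """Load step: write the cleaned rows into a storage system (SQLite here)."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    load(transform(extract(RAW_ROWS)), conn)
    print(conn.execute("SELECT * FROM sales").fetchall())
    # Only the row that passes both checks survives: [('Alice', 120.5)]
```

Chaining the three generators keeps the pipeline memory-efficient: each record flows through extract, transform, and load without materializing intermediate lists.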
By the end of this book, you'll have worked through several hands-on examples and will be able to create high-performance ETL pipelines and develop robust, scalable, and resilient environments using Python.

What you will learn:
- Explore the available libraries and tools to create ETL pipelines using Python
- Write clean and resilient ETL code in Python that can be extended and easily scaled
- Understand the best practices and design principles for creating ETL pipelines
- Orchestrate the ETL process and scale the ETL pipeline effectively
- Discover tools and services available in AWS for ETL pipelines
- Understand different testing strategies and implement them with the ETL process

Who this book is for:
If you are a data engineer or software professional looking to create enterprise-level ETL pipelines using Python, this book is for you. Fundamental knowledge of Python is a prerequisite.

Number of accesses: Unlimited
Online: http://library.ez.neoma-bs.fr/login?url=https://www.scholarvox.com/book/88947122
Permalink: https://cataloguelibrary.neoma-bs.fr/index.php?lvl=notice_display&id=581275
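The test-driven development theme mentioned in the description can be sketched briefly. The `normalize_amount` helper and its rules are illustrative assumptions, not the book's actual code: in TDD the tests are written first, then the implementation is refined until they pass.

```python
def normalize_amount(value):
    """Parse a monetary string into a float, or return None when it is invalid."""
    try:
        return round(float(value.replace(",", "").strip()), 2)
    except (ValueError, AttributeError):
        return None

# Tests expressing the intended behavior; written before the implementation
# above, then run (e.g. with pytest or directly) until they all pass.
def test_parses_clean_values():
    assert normalize_amount("120.50") == 120.5

def test_strips_thousands_separators():
    assert normalize_amount("1,200.00") == 1200.0

def test_rejects_garbage():
    assert normalize_amount("not-a-number") is None
    assert normalize_amount(None) is None

if __name__ == "__main__":
    test_parses_clean_values()
    test_strips_thousands_separators()
    test_rejects_garbage()
    print("all tests passed")
```

Isolating transformation logic in small pure functions like this is what makes an ETL pipeline testable before it is ever run against real data.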
LIBRARY - Campus Rouen
NEOMA Business School
1 Rue du Maréchal Juin, BP 215, 76825 Mont Saint Aignan cedex - 00 33 (0)2 32 82 58 26
Library - Campus Reims
59 Rue Taittinger, 51100 Reims - 00 33 (0)3 26 77 46 15