<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Martin Gatto]]></title><description><![CDATA[Stories and ideas.]]></description><link>http://martin-gatto.com/</link><image><url>http://martin-gatto.com/favicon.png</url><title>Martin Gatto</title><link>http://martin-gatto.com/</link></image><generator>Ghost 1.23</generator><lastBuildDate>Tue, 07 Apr 2026 13:09:53 GMT</lastBuildDate><atom:link href="http://martin-gatto.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[The Evolving Strategic Role of the IT Architect in the AI Era]]></title><description><![CDATA[<div class="kg-card-markdown"><p>In a previous article, &quot;<a href="http://martin-gatto.com/what-architecture-is/">What Architecture is</a>&quot;, I highlighted the role of the IT Architect as a strategic figure within organizations. 
This professional is not merely a technical expert but also a critical thinker who aligns technology strategy closely with business goals, ensuring that systems and solutions are</p></div>]]></description><link>http://martin-gatto.com/ai-and-architecture/</link><guid isPermaLink="false">653b77a373dd0504325f46cb</guid><dc:creator><![CDATA[Martin Gatto]]></dc:creator><pubDate>Fri, 27 Jun 2025 10:43:16 GMT</pubDate><media:content url="http://martin-gatto.com/content/images/2025/06/gerard-siderius-YeoSV_3Up-k-unsplash--1-.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="http://martin-gatto.com/content/images/2025/06/gerard-siderius-YeoSV_3Up-k-unsplash--1-.jpg" alt="The Evolving Strategic Role of the IT Architect in the AI Era"><p>In a previous article, &quot;<a href="http://martin-gatto.com/what-architecture-is/">What Architecture is</a>&quot;, I highlighted the role of the IT Architect as a strategic figure within organizations. This professional is not merely a technical expert but also a critical thinker who aligns technology strategy closely with business goals, ensuring that systems and solutions are robust, scalable, and future-proof.</p>
<p>As we stand at the brink of a transformative AI era, this strategic function becomes even more crucial. Emerging trends, underscored by compelling statistical forecasts, clearly point to a rapid integration and profound impact of AI technologies in the coming years.</p>
<blockquote>
<p><strong>Key Statistical Insights:</strong></p>
</blockquote>
<ul>
<li>
<p>By 2030, AI could contribute up to $15.7 trillion to the global GDP, increasing labor productivity by up to 40% (PwC).</p>
</li>
<li>
<p>Over 60% of enterprises are expected to adopt generative AI technologies by 2026 for automating both creative and routine operational tasks (Gartner).</p>
</li>
<li>
<p>By 2026, 90% of digital customer interactions will be managed by advanced AI-driven systems (Gartner).</p>
</li>
<li>
<p>The market for autonomous intelligent agents is predicted to grow fivefold between 2023 and 2030 (McKinsey &amp; Co).</p>
</li>
<li>
<p>Investments by large enterprises in responsible AI practices (ethics, transparency, compliance) are projected to reach 75% by 2027 (IDC).</p>
</li>
</ul>
<blockquote>
<p><strong>AI Trends and Future Outlook:</strong></p>
</blockquote>
<p>These statistics signal a clear direction: AI technologies are rapidly becoming integral components of organizational strategy. Generative AI is set to revolutionize operations across various industries, while autonomous AI agents will increasingly handle complex decision-making processes. The growth in AI-driven customer interactions indicates a profound shift in business-customer dynamics, requiring a more strategic and thoughtful approach to technology deployment.</p>
<p>At the same time, regulatory frameworks and ethical considerations around AI (e.g., the European AI Act) will mandate careful governance and transparent use of AI systems, underscoring the necessity for clear policies and responsible innovation.</p>
<blockquote>
<p><strong>The Architect's Strategic Role in the AI Evolution:</strong></p>
</blockquote>
<p>Given these developments, the strategic role of architects—Enterprise Architects, Solution Architects, and Data Architects—becomes pivotal in navigating the AI landscape effectively.</p>
<p>Enterprise Architects must articulate a clear AI strategy aligned with organizational goals and market demands. They will be instrumental in identifying opportunities for AI-driven innovation, managing risk, and ensuring compliance with evolving regulations and ethical guidelines. Their role as strategic visionaries will ensure seamless integration of AI into the broader enterprise architecture.</p>
<p>Solution Architects will be essential in translating AI strategies into actionable projects. They will oversee the practical application of AI technologies, ensuring they are implemented in a manner that <strong>delivers tangible business value</strong>. They must balance technical possibilities with organizational capabilities, optimizing for scalability, resilience, and compliance.</p>
<p>Data Architects will serve as the backbone of effective AI implementation, ensuring robust, reliable, and high-quality data management practices. Their expertise will facilitate the development of scalable data platforms necessary for sophisticated AI models and autonomous agents, and they will establish frameworks for data governance and ethical data usage aligned with regulatory standards.</p>
<blockquote>
<p><strong>Conclusion:</strong></p>
</blockquote>
<p>As AI reshapes the technological landscape, architects across all specializations must embrace a heightened strategic role. Their collective expertise will <strong>guide organizations through the complexities of AI adoption</strong>, delivering solutions that are not just technologically sound but also ethically responsible, compliant, and strategically impactful. In doing so, architects will reinforce their position as essential strategists, key to navigating the AI-driven future successfully.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Lessons learned on using CDC for real-time integrations]]></title><description><![CDATA[My experience with CDC integrations and some helpful tips before tackling them.]]></description><link>http://martin-gatto.com/lessons-learned-on-using-cdc-for-real-time-integrations/</link><guid isPermaLink="false">641975e54b4fb50434d37513</guid><category><![CDATA[Data Architecture]]></category><category><![CDATA[Data Integrations]]></category><dc:creator><![CDATA[Martin Gatto]]></dc:creator><pubDate>Tue, 21 Mar 2023 11:58:09 GMT</pubDate><media:content url="http://martin-gatto.com/content/images/2023/03/caos.jpeg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="http://martin-gatto.com/content/images/2023/03/caos.jpeg" alt="Lessons learned on using CDC for real-time integrations"><p>The first question is... &quot;is a real-time integration really necessary?&quot;, and the answer will always be YES. Why? Because the business, in its rationale, considers that information in real time is money, and IT architecture must be the facilitator of that goal. Of course, as architects we can try to steer the business toward alternative solutions, but at the end of the day, if they need the data in real time, that is how it should be.</p>
<p>Now, since the work of building a real-time integration solution is unavoidable, my intention here is to highlight some issues that are important to take care of, for the health of the architecture and our own.</p>
<blockquote>
<p><strong>About the source Database:</strong></p>
</blockquote>
<ul>
<li><strong>Be very careful with storage:</strong> As obvious as it may seem, keep in mind that for every database transaction, one or more CDC records will be created. This means you need an accurate measurement of the data volumes the source generates and, even more so, of the number of transactions by type. For example, an update does not generate a new record in the table being replicated, but it will generate two CDC records in the SQL Server CDC table. You also have to work together with your DBAs to purge that information from time to time: the CDC tables will grow, and when they do, performance will degrade (in the case of table-based CDC, as in SQL Server).</li>
<li><strong>Keep track of the number of sessions</strong> that the integration solution will create in the database, and verify that the database can withstand that pressure.</li>
<li><strong>Do not dismiss networking issues:</strong> the volume of data to be moved per unit of time must be supported by the network bandwidth. In general this is not a problem, but it is an item to consider.</li>
<li><strong>Understand and keep in mind the number of queries per second</strong> that the database supports. This is a very important item, since it translates directly into the pressure the database will be under.</li>
<li><strong>Always try to do the integrations using transaction logs</strong> and, whenever you can, avoid table-based capture. There are different solutions, and this varies a lot depending on the database you have. But if you have the choice, log-based integration (for example, redo logs in the case of Oracle) is the best option, since it helps you avoid latency problems, objects locked by other tasks, etc.</li>
</ul>
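<p>To make the storage point above concrete, here is a rough sizing sketch in Python. It assumes SQL Server-style CDC accounting, where an insert or delete produces one change record and an update produces two (before and after images); the function name and input shapes are my own illustration, not any vendor's tooling.</p>

```python
# Change records generated per source operation, assuming SQL Server-style
# CDC where an update writes both a before image and an after image.
RECORDS_PER_OPERATION = {"insert": 1, "delete": 1, "update": 2}

def estimate_cdc_records(daily_operations, avg_record_bytes, retention_days):
    """Estimate rows and bytes held in the change table before cleanup.

    daily_operations: dict like {"insert": n, "update": n, "delete": n}
    avg_record_bytes: assumed average size of one change record
    retention_days: how long the cleanup job keeps change records
    """
    rows_per_day = sum(
        count * RECORDS_PER_OPERATION[op]
        for op, count in daily_operations.items()
    )
    total_rows = rows_per_day * retention_days
    return total_rows, total_rows * avg_record_bytes

rows, size = estimate_cdc_records(
    {"insert": 100_000, "update": 50_000, "delete": 10_000},
    avg_record_bytes=500,
    retention_days=3,
)
# rows == 630_000, i.e. well above the 160_000 daily source transactions
```

<p>Notice how the 50,000 updates alone contribute 100,000 change records per day: this is exactly why measuring transactions by type, not just total volume, matters.</p>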
<blockquote>
<p><strong>About the integration solution:</strong></p>
</blockquote>
<p>All solutions will fulfill the task of integrating the data. What will make a real difference between one and another is the solution's capacity to manage the adversities that will arise while maintaining what we build.</p>
<ul>
<li>
<p><strong>Initial loads / Bulk loads:</strong> Keep in mind that the project, once everything is ready, will begin with an initial load of data from the source to the target. It is desirable that the integration solution can manage this initial load easily.<br>
Keep in mind, even more, that just when you think everything is going great, something will happen that ruins the consistency between what you have at the source and at the destination, and this will force you to regenerate a table, a set of tables, or the entire database. You need the solution to handle this reload from a specific point, or to redo it from scratch. Keep in mind also that you may not be able to stop the production environment (no blackout window), so you may have to do this while the source database keeps receiving transactions.<br>
<strong>IN SUMMARY:</strong> Make sure the solution you choose manages this process (the initial load) efficiently and contemplates the particular constraints of how your database is used.</p>
</li>
<li>
<p>The solution should let you work with regular expressions, so that integration tasks can select database objects by pattern and not only by exact name.</p>
</li>
<li>
<p>The solution should not require column-to-column mapping. Let it take care of matching column 1 in the source with column 1 in the destination for the same table.</p>
</li>
<li>
<p>It must be able to manage schema changes (alter column, drop column) while maintaining the continuity of your integration. Even more important, when it detects that a column is gone or a new column was created, it should replicate that change in the destination, especially when the source database is different from the destination one (e.g. SQL Server to Snowflake).</p>
</li>
<li>
<p>On first execution, it should create the tables in the destination, replicating the same structure as the source, especially primary keys. The latter is important, since primary keys are generally required when transactions need to be merged (in Snowflake, for example).</p>
</li>
<li>
<p>It should support automatic scaling of its infrastructure. There will always be contingencies that require it.</p>
</li>
<li>
<p>The licensing scheme should fit current market models (pay per use, price per hour, SaaS) rather than old schemes such as per-core licenses, which force you to negotiate more licenses with the provider every time you have a specific need to scale up or down.</p>
</li>
</ul>
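<p>Two of the capabilities above, pattern-based object selection and schema-drift detection, can be sketched in a few lines of Python. This is only an illustration of the logic a good integration solution performs internally; the function names and dictionary shapes are hypothetical, not any vendor's API.</p>

```python
import re

def select_tables(table_names, patterns):
    """Return the tables whose names fully match any of the regex patterns."""
    compiled = [re.compile(p) for p in patterns]
    return sorted(
        t for t in table_names
        if any(c.fullmatch(t) for c in compiled)
    )

def schema_drift(source_columns, target_columns):
    """Columns to add to / drop from the target so it matches the source."""
    src, tgt = set(source_columns), set(target_columns)
    return {"add": sorted(src - tgt), "drop": sorted(tgt - src)}

# Replicate every yearly sales table plus the customers table,
# without listing each table by its exact name.
tables = ["sales_2023", "sales_2024", "sales_tmp", "customers"]
selected = select_tables(tables, [r"sales_\d{4}", r"customers"])
# selected == ["customers", "sales_2023", "sales_2024"]

# The source gained an "email" column and dropped "phone"; the destination
# should be altered accordingly before replication continues.
drift = schema_drift(["id", "name", "email"], ["id", "name", "phone"])
# drift == {"add": ["email"], "drop": ["phone"]}
```

<p>The value of pattern selection shows when new yearly tables appear: <code>sales_2025</code> would be picked up automatically, with no change to the task definition.</p>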
<blockquote>
<p><strong>Architecture Design Patterns:</strong></p>
</blockquote>
<p>In general, this section is not very complex, but no less important.</p>
<p>The idea is to design an architecture that is fault tolerant from its very conception. Some of these issues are covered by earlier points (for example: automatic scaling). Even so, it is important to reflect that, with all the previous points in place, we must make sure that everything keeps working when a problem occurs, or at least that not everything stops working.</p>
<p>At this point I like to highlight my recommendation to create an architecture that is semantically decoupled according to its responsibilities.</p>
<p>This gives you assets that are fault tolerant and resilient, and makes managing the <a href="https://en.wikipedia.org/wiki/Entropy">entropy</a> of the system more agile and clear.</p>
<p>To do this, we can use a message broker, or a storage account that lets us land the data first, either as a broker offset or as a file in a storage account of our cloud subscription. This gives us a recovery point in case of errors (since the data is retained for a while in the broker or the storage account). Additionally, it lets us deploy different clients in different availability zones, and the data can also be consumed, if required, by other consumers (<a href="https://www.databricks.com/glossary/lambda-architecture#:~:text=Lambda%20architecture%20is%20a%20way,problem%20of%20computing%20arbitrary%20functions.">lambda architecture principle</a>).</p>
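<p>A minimal Python sketch of this decoupling idea: CDC events land first in a broker-like buffer that retains them, so any consumer (or a replacement consumer in another availability zone) can re-read from a known offset after a failure. The class and method names are illustrative, not a real broker API.</p>

```python
class RetainingBuffer:
    """Toy stand-in for a retaining message broker or storage account."""

    def __init__(self):
        self._log = []  # events are kept after being read, not deleted

    def append(self, event):
        """Store an event and return its offset in the log."""
        self._log.append(event)
        return len(self._log) - 1

    def read_from(self, offset):
        """Replay every event from `offset` onward, e.g. after a crash."""
        return self._log[offset:]

broker = RetainingBuffer()
for change in ({"op": "insert", "id": 1}, {"op": "update", "id": 1}):
    broker.append(change)

# A consumer that failed after processing offset 0 resumes at offset 1,
# while a second consumer (another leg of the lambda architecture) can
# independently read the full history from offset 0.
pending = broker.read_from(1)
# pending == [{"op": "update", "id": 1}]
```

<p>The recovery point exists precisely because reads do not consume the data: the source database is never asked to re-emit transactions it has already shipped.</p>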
<p><img src="http://martin-gatto.com/content/images/2023/03/Captura-de-pantalla-2023-03-21-a-las-12.31.43.png" alt="Lessons learned on using CDC for real-time integrations"></p>
<p>These are some, but not all of the issues I would recommend looking out for. Of course, you can always add more, but it's important to consider the type of solution you want. From my experience, these are the basic things and on top of that it is possible to add more complexity and functionality.</p>
<p>I want to thank you for your time.</p>
<p>Martin.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Dependency is the enemy of resilience.]]></title><description><![CDATA["Dependence is the enemy of resilience."
A brief reflection on a quality attribute -> Resilient solutions, and its relationship to concepts such as semantic coherence, decoupling, high cohesion ......]]></description><link>http://martin-gatto.com/dependency-is-the-enemy-of-resilience/</link><guid isPermaLink="false">60592bae4b4fb50434d37481</guid><category><![CDATA[Enterprise Architecture]]></category><category><![CDATA[Data Architecture]]></category><dc:creator><![CDATA[Martin Gatto]]></dc:creator><pubDate>Tue, 23 Mar 2021 01:07:19 GMT</pubDate><media:content url="http://martin-gatto.com/content/images/2021/03/resi.jpeg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="http://martin-gatto.com/content/images/2021/03/resi.jpeg" alt="Dependency is the enemy of resilience."><p>Today I received an email from a Cloud service provider, reporting the lessons learned from a general problem they had with their connectivity services to the public network and their dependencies.</p>
<p>The point is that, although at the data-center level the answer to these problems (for rapid recovery from adversity) is often redundancy, in data architecture, and in architecture in general, the solution for being resilient to failures is not always duplication.</p>
<p>A few years ago, talking with a colleague, I explained how to segment the data catalog into domains, obtaining a logical and physical separation between the needs of the different domains. This gives us the benefit of building solutions where the requirements of two different domains are decoupled, so that modifying an application or data entity of domain &quot;A&quot; does not imply an indirect cost in an application or data entity of domain &quot;B&quot;.<br>
This is how that talk arrived at the concept of semantic coherence, a concept already well known (but little applied in general) that in my opinion is key to addressing a quality attribute of architecture design such as resilience. <a href="https://martinfowler.com/articles/data-mesh-principles.html">I recommend this very interesting read on Data Mesh</a>.</p>
<p><img src="http://martin-gatto.com/content/images/2021/03/IT-Resilience.jpeg" alt="Dependency is the enemy of resilience."></p>
<p>This concept comes up constantly today, directly or indirectly, whenever we talk about microservices, APIs, Kubernetes, etc. All of them aim to decouple responsibilities in architectures and create high resilience against failures, alongside concerns such as horizontal scaling and cohesion management.</p>
<p>A more extended treatment of this matter in terms of data is what I wrote in the article <a href="http://martin-gatto.com/data-lakes-dr/">Data Lake is more than 'dump your data here'</a>, an article motivated by that talk I mentioned about domain segmentation.</p>
<p>I also refer to this when I talk about the meaning of IT architecture, when I comment that architecture is not only about designing applications and defining the rules of the game, but about how these applications are related within a strategy ---&gt; <a href="http://martin-gatto.com/it-architecture-and-the-relationships/">IT architecture and the importance of the relationships</a></p>
<p><strong>Dependence is the enemy of resilience</strong>, a phrase that I liked a lot to define why semantic coherence is a practice that should be on the design board when designing and generating an architecture.</p>
<p>I think that reasoning in terms of reducing dependencies between domains, between applications, and on providers opens up a paradigm that is already heard in the market, though not yet very frequently: multi-cloud / hybrid cloud.</p>
<p>Tks.</p>
</div>]]></content:encoded></item><item><title><![CDATA[What IT Architecture is?]]></title><description><![CDATA[My reflections on what IT architecture is, navigating two key premises. Togaf and the book by Eben Hewitt]]></description><link>http://martin-gatto.com/what-architecture-is/</link><guid isPermaLink="false">5ff8353afea978087d414434</guid><category><![CDATA[Enterprise Architecture]]></category><category><![CDATA[Data Architecture]]></category><category><![CDATA[Application Architecture]]></category><dc:creator><![CDATA[Martin Gatto]]></dc:creator><pubDate>Wed, 13 Jan 2021 16:34:14 GMT</pubDate><media:content url="http://martin-gatto.com/content/images/2021/01/hombre-de-vitruvio-tridimensional-600x618.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="http://martin-gatto.com/content/images/2021/01/hombre-de-vitruvio-tridimensional-600x618.jpg" alt="What IT Architecture is?"><p>For quite some time I have noticed that in the market in general there is a distortion, or rather a lack of clear definition, of what IT architecture really is. Looking at how the architecture department works in many organizations, I see that it often ends up acting as a battalion of experts in different things, without addressing the real problems that make up the architecture of an organization and its strategy.</p>
<p>I have worked in places where, as I came to understand each organization's architecture area, I noticed a common pattern:</p>
<ul>
<li>They had not had an IT architecture map for a long time, or it had never been made.</li>
<li>There was no definition of standards about how things are done or who does them.</li>
<li>IT Architecture did not have a clear, defined, and documented methodology for how the organization's architecture is managed, let alone evolved. It comprised a group of experts who dealt with demand overflows and specialized matters, sometimes playing the role of project leaders or referents, and many other times acting as level-3 support.</li>
<li>I noticed a total absence of proven frameworks or methodologies for modeling and management, as accelerators for managing the catalog and its connection with the business. In other cases, an excessive use of them that created unnecessary bureaucracy.</li>
<li>Architectures whose change or evolution responded only to the cost of development, implementation, and maintenance of the solution, and not to a clear strategic definition. The design rationale was &quot;how much does it cost?&quot;</li>
</ul>
<br>
<br>
<p><strong>Addressing some definitions</strong></p>
<blockquote>
<p><a href="https://pubs.opengroup.org/architecture/togaf91-doc/arch/chap03.html">TOGAF</a>: The structure of components, their inter-relationships, and <strong>the principles and guidelines governing their design and evolution over time</strong>.</p>
</blockquote>
<p>The architect, if we follow the TOGAF definition (in my opinion the clearest and most precise), is the person in charge of, or custodian of, that structure of components. He safeguards the strategy of how they should <a href="http://martin-gatto.com/it-architecture-and-the-relationships/">relate</a>, taking care of the semantics of the architecture and putting on the table the rules of the game for how these components and their relationships are governed, designed, and evolved over time, <strong>so that technology is a facilitator of business objectives</strong>. An interesting consequence of this definition, which I see as an immediate result, is that architecture ends up creating the mechanisms and knowledge for decision-making.</p>
<p>A comment I like to add is that the architect is knowledgeable about many things in the IT world, but is also a business person and, above all, <strong>a strategist</strong>, since what he seeks to solve is how these components and their relationships will evolve over time to facilitate the objectives of an organization efficiently and effectively.</p>
<blockquote>
<p><strong>Other points of view</strong></p>
</blockquote>
<p>Some time ago, I was looking for the correct definition of all these matters to put the correct titles and words. I had a lot of things flying around in my head and I needed to somehow give them a first and last name.</p>
<p>In my eagerness to expand these horizons, I read the book <a href="https://www.amazon.es/Technology-Strategy-Patterns-Architecture-English-ebook/dp/B07JJNSP92/ref=tmm_kin_swatch_0?_encoding=UTF8&amp;qid=&amp;sr=">&quot;Technology Strategy Patterns: Architecture as Strategy&quot;</a> by Eben Hewitt, which was very helpful to me. It opened up my vision, helping me put things into the correct words and understand the aspects of the practice of architecture that really give meaning to its function: establishing a strategy and defining those <strong>principles and guidelines</strong> needed to manage the architecture as a strategic plan, instead of acting as a group of experts who simply decide whether to use Docker or Kubernetes, Java or Python, Oracle or SQL Server, etc. (these are just some of the functions of a true architecture team).</p>
<br>
<br>
<blockquote>
<p><a href="https://en.wikipedia.org/wiki/Vitruvius">Vitruvius</a></p>
</blockquote>
<p>In his first attempt to define it, Hewitt surprisingly explains it through an analogy with <strong>Vitruvius</strong>.<br>
<br></p>
<p><img src="http://martin-gatto.com/content/images/2021/01/1024px-Da_Vinci_Vitruve_Luc_Viatour.jpg" alt="What IT Architecture is?"></p>
<p>Vitruvius (Vitruvio in Spanish) was known as the father of architecture.<br>
He was also the one who, around 40 BC, put forward the idea that all buildings should have three attributes: <strong>firmitas, utilitas, and venustas, meaning: strength, utility, and beauty.</strong></p>
<p>By declaring these three premises, Vitruvius was in fact declaring the principles of architecture, explaining in some way that (referring to the book):</p>
<ul>
<li>
<p><strong>Strength:</strong> It is not necessarily about building solid buildings or solid pieces of software. Buildings are solid, but they are designed to be flexible. Transposed to the field of modern IT, I interpret this as designing robust solutions, built to last over time, yet flexible enough to adapt and change.</p>
</li>
<li>
<p><strong>Utility</strong>: designed for the user. I think that already in his time (c. 80–70 BC to after c. 15 BC),</p>
<ul>
<li>Vitruvius was one of the first to propose a definition of UX (User Experience) associated with architecture principles, or at least to state that the user experience is important.</li>
</ul>
</li>
<li>
<p><strong>Beauty</strong>: He does not mean it in the strict sense of what pleases the eye of the viewer. It is a way of capturing harmony, meaning, and form.</p>
<ul>
<li>In terms of IT Architecture, I could infer that he was one of the first to coin the definition of Semantic Coherence of architecture.</li>
</ul>
</li>
</ul>
<p>Continuing my reference to Eben's book:</p>
<p><strong>The role of the architect</strong></p>
<p>[...]<br>
<em>&quot;The architect is hopefully not concerned with low-level details of the code itself inside one system, but is more focused on where data-center boundaries are crossed, where system component boundaries are crossed. Here’s my definition of an architect’s work: it comprises the set of strategic and technical models that create a context for position (capabilities), velocity (directedness, ability to adjust), and potential (relations) to harmonize strategic business and technology goals. Notice that in this definition, the role of the architect and technology strategist is not to merely serve the business but to play together. I have been in shops where technology was squarely second fiddle, a subservient order-taking organization to support what was deemed the real business.</em><br>
<em>That’s no fun for creative people who have something to contribute. But more importantly, I submit that businesses, now more than ever, cannot sustain such a division, and to create greater competitive advantage must work toward integration with co-leadership.</em></p>
<p>Over my 20 years in this field, I’ve come to conclude that there are three primary concerns of the architect:</p>
<ol>
<li>Contain entropy.</li>
<li>Specify the nonfunctional requirements.</li>
<li>Determine trade-offs.<br>
[...]...<br>
Hewitt, Eben. Technology Strategy Patterns (pp. 11-12). O'Reilly Media</li>
</ol>
<blockquote>
<p>My conclusions</p>
</blockquote>
<p>With this entry, I am not trying to strip the title of Architect from those Java ninjas who work as architects, or to claim that the data architect who knows a lot about Python and gets into the mud to move things forward is not an architect because he deals with those details.<br>
<br><br>
My goal in citing this book and TOGAF's definition is that, clearly, by either one, the architect's primary concern should be the strategy of how to make the architecture an enabler for the business to meet its goals.<br>
<br><br>
A good data architect may not be a galaxy-level expert in writing distributed processing programs, but he is the one who can work with those experts on the strategy of when it is time to use a Big Data cluster, and thus use distributed processing to achieve the objectives the commercial area has. He is a strategist.
This same person may not be an expert in developing microservice-based applications, but he has the vision and experience to know that translating the business logic into stored procedures in the database is not a good strategy for managing that business logic: over time it will not be scalable, not to mention that it will create vendor lock-in with that database provider.</p>
<p>Thanks for your time.</p>
<p>M.</p>
</div>]]></content:encoded></item><item><title><![CDATA[IT architecture and the importance of 
 the relationships]]></title><description><![CDATA[The It architecture and the importance of your relationships. How important are the relationships between components in the IT architecture? How I use Archimate and Neo4J to map these relationships and publish them, look for patterns and monitor the health of the architecture.]]></description><link>http://martin-gatto.com/it-architecture-and-the-relationships/</link><guid isPermaLink="false">5d9c47c215e6184605a9c71b</guid><category><![CDATA[Enterprise Architecture]]></category><dc:creator><![CDATA[Martin Gatto]]></dc:creator><pubDate>Mon, 11 Jan 2021 10:24:44 GMT</pubDate><media:content url="http://martin-gatto.com/content/images/2021/01/Technology.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="http://martin-gatto.com/content/images/2021/01/Technology.jpg" alt="IT architecture and the importance of 
the relationships"><p>Some time ago, I listened to a presentation on various definitions of what IT Architecture really is, contrasted with what many companies usually confuse it with. I think this confusion deserves a special article, which I will write soon, but for the moment I want to focus on relationships.</p>
<p><img src="http://martin-gatto.com/content/images/2020/03/relationship.jpg" alt="IT architecture and the importance of 
 the relationships"></p>
<p><strong>So... what is Architecture?</strong></p>
<blockquote>
<p>The structure of components, their inter-relationships, and the principles and guidelines governing their design and evolution over time.  (<a href="https://pubs.opengroup.org/architecture/togaf91-doc/arch/chap03.html">TOGAF</a>)</p>
</blockquote>
<p>And this is where I would like to zoom in and dedicate an entry in this blog: <strong>I want to talk about relationships, their modeling, and the value they give you in the IT strategy.</strong></p>
<p>In my opinion, beyond any formal definition, the strategy of relationships, and how they are managed and governed, is a key factor in the approach to an architectural strategy.</p>
<p>They give you the capability to know, for your application landscape:</p>
<ul>
<li>How it is connected with your business processes.</li>
<li>How a business function is related to the applications that support it.</li>
<li>How your data flows between applications.</li>
<li>Etc.</li>
</ul>
<p>Not long ago, I started to map the architecture of a place where I work using <a href="https://www.archimatetool.com/">Archimate</a>, applying TOGAF as a methodology and, because the company is a service company, using the <a href="https://www.tmforum.org/">TmForum</a> framework as a reference. I should clarify that the company is not a telecommunications company, but it is a service company, and the methodological frameworks offered by TOGAF and TmForum are great references and a great help for having a methodology and a reference standard.</p>
<p><img src="http://martin-gatto.com/content/images/2020/03/TOGAF-ADM-with-Lifecycles-700.png" alt="IT architecture and the importance of 
 the relationships"></p>
<p>Taking these premises, I used a method that allows me to map processes, data, applications and technology in a single place, and thus understand the relationships between them and act accordingly, having this knowledge in my dashboard (Archimate).<br>
To do this I started using Archimate, modeling four main universes (Data, Applications, Processes and Technology), grouping each of these universes in the solution and organizing the objects of each universe into domains (e.g. customer, product, resource, etc.). The result looks something like this:</p>
<p><img src="http://martin-gatto.com/content/images/2020/05/Captura-de-pantalla-2020-05-24-a-las-8.48.20.png" alt="IT architecture and the importance of 
 the relationships"></p>
<p>The challenge is that each of these entities has its own secondary entities, as well as relationships to the data it manages and to the business processes and applications. The first thing I thought was <strong>&quot;when this grows, it should be easy to interpret and you should be able to &quot;navigate&quot; over this information.&quot;</strong></p>
<p>Clearly, if you have read this far and have been working in IT for a while, you have the answer... exactly, that's right: a GRAPH database! What better way to formalize the architecture than in a database, obtaining as a result the ability to query it quickly, by anyone who needs it, while the database also serves as a mechanism for the analysis and management of relationships.</p>
<p><strong>My experience:</strong></p>
<p>Without being an expert in Neo4j (I still am not), and with few tools, I quickly gathered the following experience:</p>
<ul>
<li>Archimate has a plugin for managing the model with a database. The <a href="https://github.com/archi-contribs/database-plugin/wiki">plugin</a> is very good: with relational databases it allows you to version the modeling, save versions, etc. But the problem I found in my experience as a user is that in my tests it was unstable; sometimes it let me do what I wanted, and when I repeated the same step after some changes, it gave an error.</li>
<li>The same plugin also allows export to a graph database like Neo4j. In my experience it works fine (I didn't use it much either), but the problem I ran into is how the plugin builds objects inside the database: for each entity, relationship, and property, the plugin creates a separate node and a relationship. That was not what I was looking for, since in my modeling the properties are specific attributes of an entity, not separate but related objects.</li>
</ul>
<p><img src="http://martin-gatto.com/content/images/2020/05/Captura-de-pantalla-2020-05-24-a-las-9.13.37.png" alt="IT architecture and the importance of 
 the relationships"></p>
<p>So before scripting in python or some other programming language, I decided to do my test simply with Excel and the Archimate export to .csv files, where the export produces three files: one for the elements, another for the relationships and a third for the properties.<br>
In each export file I added a small and very simple column with an Excel formula that concatenates the different parts of the Neo4j statement, handling each line depending on what it is:</p>
<ul>
<li><em><strong>Elements</strong></em>: insert them.</li>
<li><em><strong>Relationships</strong></em>: create them.</li>
<li><em><strong>Properties</strong></em>: update the previous two.</li>
</ul>
<p><img src="http://martin-gatto.com/content/images/2020/07/Captura-de-pantalla-2020-07-28-a-las-10.56.08.png" alt="IT architecture and the importance of 
 the relationships"></p>
<p><img src="http://martin-gatto.com/content/images/2020/07/Captura-de-pantalla-2020-07-28-a-las-10.56.22.png" alt="IT architecture and the importance of 
 the relationships"></p>
<p><img src="http://martin-gatto.com/content/images/2020/07/Captura-de-pantalla-2020-07-28-a-las-10.56.34.png" alt="IT architecture and the importance of 
 the relationships"></p>
<p>The Excel sheet with the exported data builds the Neo4j statements and, voilà, after executing the resulting statements we have an architectural interface to publish catalogs and dependencies and to analyze RELATIONS.</p>
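<p>The same transformation the Excel formulas perform can be sketched in a few lines of Python, which the text mentions as the alternative. This is a minimal sketch, not the actual script: the CSV column names (<code>ID</code>, <code>Type</code>, <code>Name</code>, <code>Source</code>, <code>Target</code>) are assumptions you would adjust to match your own Archimate export.</p>

```python
import csv

# Assumed column names -- check them against your own Archimate .csv export.
def element_to_cypher(row):
    """Build a Cypher CREATE statement for one exported element."""
    label = row["Type"].replace(" ", "")       # e.g. "ApplicationComponent"
    name = row["Name"].replace("'", "\\'")     # naive single-quote escaping
    return f"CREATE (:{label} {{id: '{row['ID']}', name: '{name}'}});"

def relationship_to_cypher(row):
    """Match both endpoint nodes by id and create the relationship."""
    rel_type = row["Type"].replace(" ", "_").upper()
    return (f"MATCH (a {{id: '{row['Source']}'}}), (b {{id: '{row['Target']}'}}) "
            f"CREATE (a)-[:{rel_type}]->(b);")

def csv_to_statements(path, builder):
    """Turn one export file into a list of Neo4j statements."""
    with open(path, newline="", encoding="utf-8") as f:
        return [builder(row) for row in csv.DictReader(f)]
```

<p>Feeding the elements file through <code>csv_to_statements(path, element_to_cypher)</code> yields one statement per line, ready to paste into the Neo4j browser or <code>cypher-shell</code>.</p>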
<p>It is important, beyond any standard or modeling method, to maintain the order of the objects, since this also determines the usability of the catalog and how easy it is to understand. In my case, as I use the grouping logic of TOGAF / TMForum, I have three great universes, which I transpose using the &quot;Group&quot; element of Archimate; this allows me to keep my work board organized (data elements related to groups of data elements, and so on with everything else).</p>
<p>For example:<br>
<img src="http://martin-gatto.com/content/images/2020/05/Captura-de-pantalla-2020-05-24-a-las-12.41.56.png" alt="IT architecture and the importance of 
 the relationships"></p>
<p>It is also very important to take some time before modeling to define the properties of each element, and to build the habit of completing those fields. This will allow you, in the future, to look for patterns that reveal the health of the architecture. For example, you could query the database for:</p>
<ul>
<li>&quot;All nodes of type Application with more than one relation to a node of type Application Function&quot;, and thus find semantic coherence problems where more than one application resolves a single business function.</li>
<li>&quot;All objects of type Data Entity with the label GDPR Sensitive, with their relation to nodes of type Application&quot;, and thus list the applications that manage data regulated by GDPR.</li>
</ul>
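<p>As a sketch, those two checks could look like the following Cypher queries, held here as Python strings. The labels (<code>Application</code>, <code>ApplicationFunction</code>, <code>DataEntity</code>), the <code>MANAGES</code> relationship type and the <code>gdpr_sensitive</code> property are illustrative names, not taken from the model above:</p>

```python
# Functions resolved by more than one application
# (a possible semantic-coherence problem).
FUNCTION_OVERLAP = """
MATCH (f:ApplicationFunction)<--(app:Application)
WITH f, collect(app.name) AS apps
WHERE size(apps) > 1
RETURN f.name, apps
"""

# Applications that manage GDPR-regulated data entities.
GDPR_APPS = """
MATCH (d:DataEntity {gdpr_sensitive: true})<-[:MANAGES]-(app:Application)
RETURN DISTINCT app.name
"""
```

<p>Either string can be run as-is in the Neo4j browser once the model has been loaded.</p>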
<blockquote>
<p>As you can see, this is the importance of the relationships.</p>
</blockquote>
<p><img src="http://martin-gatto.com/content/images/2020/07/Captura-de-pantalla-2020-07-27-a-las-15.49.01.png" alt="IT architecture and the importance of 
 the relationships"></p>
<p>All objects have properties, and the secret of this strategy lies in the properties of the objects and in organizing them into an interactive map from which you can obtain value.</p>
<p><img src="http://martin-gatto.com/content/images/2020/07/Captura-de-pantalla-2020-07-27-a-las-15.50.24.png" alt="IT architecture and the importance of 
 the relationships"></p>
<p>As you will notice, the uses and potential are endless; it is only a matter of devising your modeling strategy and the discipline to sustain it, in order to obtain value from the data that your own IT architecture generates in the relationships between objects.</p>
<p><img src="http://martin-gatto.com/content/images/2020/07/Captura-de-pantalla-2020-07-27-a-las-15.51.20.png" alt="IT architecture and the importance of 
 the relationships"></p>
<blockquote>
<p><strong>Wait... but what does this look like in a graph database?</strong></p>
</blockquote>
<p>Set the video quality to HD:</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/DV7furtLVZE" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<p>To conclude, in my experience relationship management is almost everything in architecture management and its strategy. It is impossible to imagine an architecture area that thinks about its roadmap and solutions without considering the implications for the rest of the ecosystem, <strong>and even more so without knowing and documenting how the application ecosystem is related</strong>.</p>
<p>Thanks for your time.</p>
<p>Martin.-</p>
</div>]]></content:encoded></item><item><title><![CDATA[Data Architecture strategy for Analytical Environments]]></title><description><![CDATA[How can an Enterprise Data Architect help you think about your Analytical strategy?]]></description><link>http://martin-gatto.com/analytical-strategy/</link><guid isPermaLink="false">5b16c50f53de814f8b51b019</guid><category><![CDATA[Data Architecture]]></category><category><![CDATA[Business Intelligence]]></category><category><![CDATA[BigDataArchitecture]]></category><category><![CDATA[Big Data]]></category><dc:creator><![CDATA[Martin Gatto]]></dc:creator><pubDate>Mon, 17 Sep 2018 13:59:39 GMT</pubDate><media:content url="http://martin-gatto.com/content/images/2018/09/header_publicacion.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><blockquote>
<img src="http://martin-gatto.com/content/images/2018/09/header_publicacion.jpg" alt="Data Architecture strategy for Analytical Environments"><p><em><strong>Context:</strong></em></p>
</blockquote>
<p>I worked for around 10 years in projects and technologies related to data (Business Intelligence, Big Data, data integrations, etc.) and spent the last four years in the Data Architect role. Many times I saw how Analytics / Business Intelligence / Big Data projects explode and simply fail to reach their objectives, for many reasons.<br>
The main one is the absence of a Data Architecture strategy for this kind of project.<br>
So, the context is how Data Architecture can help analytics teams build strong data models, help the business, and survive along the way to completing this objective.</p>
<blockquote>
<p><em><strong>Introduction:</strong></em></p>
</blockquote>
<p>We have a lot of books on this topic, but I think these books do not help when we try to create a Big Data or Business Intelligence strategy. This is because most of them try to explain business intelligence from the data modeling side, and not from the corporate strategy and the reasons why some companies need a Big Data or Business Intelligence system. Another problem is that the union between two vital concepts does not exist: the strategy, and the good practices for building an analytics environment.</p>
<p>Before writing this post, I read several books on this topic and I did not find any that answered my question: &quot;How to think about an Analytics Strategy in relation to the Corporate Data Architecture Plan and Corporate Strategy&quot;.</p>
<p>These books are very good and cover concepts related to the practice of analytical environments and the technology associated with them. For example:</p>
<ul>
<li>Think about your model in relation to your <strong>Business Processes</strong>.</li>
<li>Save the data history: capture your data changes and store them in the dimensions (slowly changing dimensions).</li>
<li>Use surrogate keys.</li>
<li>Your strategy should not be a part of your infrastructure diagram; your infrastructure diagram has to be a part of your strategy.</li>
</ul>
<p>So, with this context in mind, and this not-so-short introduction written, I will take some examples using architecture frameworks and try to think through the Data Architecture strategy behind a Big Data or Business Intelligence strategy. In my case, I'll use the <a href="https://www.tmforum.org/tm-forum-frameworx-2/">TM Forum framework</a> for the explanation.</p>
<blockquote>
<p><em><strong>Post detail:</strong></em></p>
</blockquote>
<p>Architecture frameworks are a real headache, but over the years I made friends with them. Only when you understand how they work do you understand the value you can get from them.</p>
<p>In my case, I started with frameworks in my job in the Data Architect role, where my function covered everything related to data architecture environments, Big Data and Analytics being one of them.</p>
<p>I started to learn and work with the <a href="https://www.tmforum.org/tm-forum-frameworx-2/">TmForum</a> framework to map the data entities, their relation to the applications that manage them, and after that the business processes that manage these entities.</p>
<p>The TmForum architecture framework consists of three great universes:</p>
<ul>
<li>The universe of applications (<a href="https://www.tmforum.org/application-framework/">TAM</a>): for each domain that exists in the company, it maps the applications related to that domain.</li>
<li>The universe of business processes (<a href="https://www.tmforum.org/business-process-framework/">eTOM</a>): it documents the processes that make the organization work.</li>
<li>The universe of data (<a href="https://www.tmforum.org/information-framework-sid/">SID</a>): it documents the data entities that make up the organization, by its different domains.</li>
</ul>
<p>In these three points we have the key to thinking in a new way about the Big Data and Business Intelligence strategy, and about how we approach it.</p>
<blockquote>
<p>How?</p>
</blockquote>
<p>First, we have to answer the question: what is an Analytics Strategy?</p>
<p><strong>The Analytics Strategy</strong> is how you manage your data to transform it into information. To transform isolated data into information, you have to think about your data in terms of how it reflects your business processes. After that, you will be able to model your data in those terms, but first you have to answer:</p>
<ul>
<li>What processes does my company have?</li>
<li>What does the business want to know?</li>
<li>Which data represents the processes?</li>
<li>In which applications do I have the data I need?</li>
</ul>
<p>To make these points possible, you have to establish the relationships between:</p>
<ol>
<li>The data entities.</li>
<li>The applications that own this data.</li>
<li>The processes responsible for managing this data.</li>
</ol>
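<p>A toy example (with invented step, entity and application names) shows why these three relationships matter: once they are mapped, finding the source systems an analytical model needs becomes a simple lookup.</p>

```python
# Invented names for illustration: each process step touches some data
# entities, and each entity is owned by one application.
STEP_ENTITIES = {
    "Capture customer order": ["Customer", "CustomerOrder"],
    "Activate product":       ["Product", "Service"],
    "Issue invoice":          ["CustomerBill"],
}
ENTITY_OWNER = {
    "Customer": "CRM", "CustomerOrder": "CRM",
    "Product": "Catalog", "Service": "Provisioning",
    "CustomerBill": "Billing",
}

def sources_for_process(steps):
    """Applications the analytical model must pull data from for these steps."""
    return sorted({ENTITY_OWNER[e] for s in steps for e in STEP_ENTITIES[s]})
```

<p>Asking which systems feed an analysis of order capture and invoicing is then one function call, rather than a round of interviews.</p>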
<p>Let's take an example with the sales process related to product activation.</p>
<p>The eTOM framework defines many processes; one of them is the &quot;Order to Payment&quot; process, which defines all the different steps from the customer ordering the product to its activation. Look:</p>
<p><img src="http://martin-gatto.com/content/images/2018/06/Captura-de-pantalla-2018-02-03-a-las-19_opt.png" alt="Data Architecture strategy for Analytical Environments"><br>
<em>tmForum official doc</em></p>
<p><strong>Order to payment process (complete step by step)</strong></p>
<p><img src="http://martin-gatto.com/content/images/2018/09/order_to_payment_process.png" alt="Data Architecture strategy for Analytical Environments"></p>
<p><img src="http://martin-gatto.com/content/images/2018/09/etom_framework_img-compressor.png" alt="Data Architecture strategy for Analytical Environments"></p>
<p>We can see how many things the process does between the step where the customer accepts the proposal and the point where the service is ready to use and the invoice is received.</p>
<p>When business people say &quot;I want to know how much we sold&quot;, they mean that they want to see the process by which the company:</p>
<ul>
<li>Sells its products,</li>
<li>The performance of the sales process,</li>
<li>The quantities of products from the commercial offer that the company sold,</li>
<li>At what moment after accepting the proposal the customer decides to abandon the process,</li>
<li>Etc.</li>
</ul>
<p><strong>In our language, in this case, the business staff wants to know about the &quot;Order to Payment&quot; process.</strong></p>
<p>Each step of this process is related to a specific data entity/domain (Customer, Product, Resource, Service, Market/Sales).</p>
<p>So, once we have identified our process, we have to understand and map the relationship of each process step with the corresponding data entity.</p>
<p>Look in the following image at how the domains have data entities, grouped with the same methodology as eTOM:</p>
<p><img src="http://martin-gatto.com/content/images/2018/09/sid_framework_img.png" alt="Data Architecture strategy for Analytical Environments"></p>
<p>Each domain has entities, and each entity is composed of data objects. For example, in the Customer domain you will find all the data entities (ABEs) related to the customer (customer, customer order, customer bill, etc.).</p>
<p>And each ABE (a group of data entities) contains the definition of those entities and the objects of which they are composed. In this case, the entity &quot;Customer&quot; is a PartyRole composed of Parties, and these can be individuals or organizations:</p>
<p><img src="http://martin-gatto.com/content/images/2018/09/party_as_customer.png" alt="Data Architecture strategy for Analytical Environments"></p>
<p>This definition for each data entity will not only give the organization a definition of how each entity is composed; it will give you one absolutely vital concept: <strong>A COMMON LANGUAGE FOR YOUR DATA</strong>, because in this example you are defining what a customer is and how it is composed.</p>
<p>So, once you have the mapping for all your data entities and processes, the work is not finished: you need to merge these three universes to build the relationships between them (data, applications and processes) and then think about your strategy for modeling your analytics environment.</p>
<p>Let's work through some examples to make it clearer:</p>
<p>In this example I'll work with the Customer entity as the data entity example, and the &quot;Order to Payment&quot; process, to build the relationships between processes, applications and data entities.</p>
<blockquote>
<p><strong>Customer Data entities:</strong></p>
</blockquote>
<p><img src="http://martin-gatto.com/content/images/2018/09/Archimate_customer_def.png" alt="Data Architecture strategy for Analytical Environments"></p>
<p>Here we have what it means when we talk about &quot;Customer&quot;. Customer is a PartyRole, composed of Parties, and these Parties can be Organizations or Individuals. Customer is a relationship between data entities, and when we talk about Individuals we are not only talking about customers: these individuals could be partners, employees or another PartyRole.</p>
<blockquote>
<p><strong>Order to payment process:</strong></p>
</blockquote>
<p>The &quot;Order to Payment&quot; process (in this example) describes, step by step, how a customer request becomes a ready-to-use product. Each step is grouped by domain.</p>
<p>So, to define the correct strategy before starting to design the analytical data models and the technical requirements, we have to think about &quot;what the business wants to know&quot; and detect which business process in the company can resolve each of those questions.</p>
<p>Next, inside the business process, we have to work out which data entities are involved in each step, and what each step in the process means.</p>
<p>We have to build the relationships between these two universes (data entities and business processes), and we will get all the data entities we need to pull from the operational data sources to build the analytical model. We will also obtain the data entities managed by each of these processes, together with their relationships and operation, and of course the entities that represent the answers the business needs.</p>
<p><img src="http://martin-gatto.com/content/images/2018/09/archimate_order_to_payment.png" alt="Data Architecture strategy for Analytical Environments"></p>
<ul>
<li>In yellow: business process steps.</li>
<li>In orange: data entities.</li>
</ul>
<p>If you think about your analytical platform in this way, you are thinking in terms of a strategy, where you set your action points centered on the business and the requirements it needs to solve.</p>
<p>You are thinking about your data models in relation to the business processes, and not in relation to the reports the users happen to be building at that moment (a common mistake).</p>
<p>If you use a methodology where the data lakes and the analytical models represent the business processes of the company and the canonical representation of the data entities, your analytical platform will always be in line with the business needs.</p>
<p>You can start to define your architecture and the models without anyone from the business, because the business has already defined the business processes. So, you need to understand that and start to think about:</p>
<ol>
<li>What do my users want to know?</li>
<li>Which business processes exist in my company, and how can I merge point one with those business processes?</li>
<li>Which processes should I prioritize?</li>
<li>Start defining the data structures that represent the data entities and their relationships, creating those entities in the canonical form of their definitions.</li>
</ol>
<p>Thanks. :)</p>
<p>Kind Regards,</p>
<p><a href="http://martin-gatto.com/me-presento/">Martin Gatto</a>.</p>
<p><em><strong>Contact:</strong></em></p>
<ul>
<li><em><strong>Linkedin:</strong></em> <a href="https://www.linkedin.com/in/martingatto/">https://www.linkedin.com/in/martingatto/</a></li>
<li><em><strong>Twitter:</strong></em> <a href="https://twitter.com/gattom83">https://twitter.com/gattom83</a></li>
</ul>
</div>]]></content:encoded></item><item><title><![CDATA[Be happy with Data Flow Integration Technology]]></title><description><![CDATA[Build your real-time data flow integrations in a simple way and keep developers and operators happy.]]></description><link>http://martin-gatto.com/apache-nifi-good-solution/</link><guid isPermaLink="false">5b16c50f53de814f8b51b018</guid><category><![CDATA[Big Data]]></category><category><![CDATA[BigDataArchitecture]]></category><dc:creator><![CDATA[Martin Gatto]]></dc:creator><pubDate>Mon, 16 Oct 2017 11:13:00 GMT</pubDate><media:content url="http://martin-gatto.com/content/images/2017/10/caos.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="http://martin-gatto.com/content/images/2017/10/caos.jpg" alt="Be happy with Data Flow Integration Technology"><p><img src="http://martin-gatto.com/content/images/2017/10/wirechaos-compressor.jpg" alt="Be happy with Data Flow Integration Technology"></p>
<h2 id="context">Context:</h2>
<p>For a long time, I thought about how to solve the friction between data engineers and operational areas, and its relation to the development life cycle of data flow integrations.</p>
<p>To be clear, we have three areas:</p>
<blockquote>
<p><strong>Operations</strong>: they monitor the solutions, fix and solve problems in the production environments, and keep the applications healthy.</p>
</blockquote>
<blockquote>
<p><strong>Developers</strong>: they write the code, thinking about performance aspects (the final product).</p>
</blockquote>
<blockquote>
<p><strong>Data Architects</strong>: they have to design, translate the business needs, and represent the solution architecture that will be developed by the development team and later managed and monitored by the operations team.</p>
</blockquote>
<p><strong>So. What is the context?</strong></p>
<p>We have many skills involved in this process, and not all players think the same way about a solution.<br>
This is because the developer teams think in terms of performance and code, operators think in terms of managing and monitoring the solution, and the architect has to think about both from moment zero, when designing the solution.</p>
<blockquote>
<p>So, this is the context: &quot;we had to select a solution to solve our data flow needs, in the context of a large company, simplifying the processes between the operations and development areas and making possible an improvement in the development life cycle&quot;.</p>
</blockquote>
<h2 id="sointhiscontextweselectedapachenifitosolveourdataflowneedswhy">So, in this context we selected Apache NiFi to solve our data flow needs... Why?</h2>
<p><a href="https://nifi.apache.org">Apache NiFi</a> is an open source tool for data flow between systems, whether on the same site or across remote sites.</p>
<p><a href="https://nifi.apache.org/docs.html">Apache NiFi definition</a>:</p>
<p><img src="http://martin-gatto.com/content/images/2017/10/Captura-de-pantalla-2017-10-08-a-la-s--20.16.30.png" alt="Be happy with Data Flow Integration Technology"></p>
<p>It's simple to use and quick to learn, and if you need it, you can buy Hortonworks support (they contribute heavily to NiFi and offer commercial support).</p>
<p><strong>Attributes by which we select this solution:</strong></p>
<ul>
<li>
<p><strong>Simplicity:</strong> we want a solution that is simple to develop with and simple for the operations teams: easy monitoring, easy to understand what an integration is trying to do, easy to debug.</p>
</li>
<li>
<p><strong>Time to market:</strong> big data integrations are often complex, and the process of putting the code into production is not easy. We have many processes for transferring knowledge to the operations teams and for developing the monitoring processes that check an integration's health.<br>
In this case, Apache NiFi gives us the possibility of simplifying this process and having a single place to develop our data flow integrations (not stream processing).</p>
</li>
<li>
<p><strong>Scalable:</strong> the horizontal scaling model and the commodity-hardware philosophy give us the ability to speed up our growth if we need it.</p>
</li>
<li>
<p><strong>Visual command and control:</strong> the NiFi UI makes the development process simple and enables coexistence between the operations and development areas, allowing operators to understand an integration more easily while developers deliver in less time.</p>
</li>
<li>
<p><strong>Clustering:</strong> NiFi is designed to scale out by clustering many nodes together, as described in the next image.</p>
</li>
</ul>
<p><img src="http://martin-gatto.com/content/images/2017/10/nifi_clustering.png" alt="Be happy with Data Flow Integration Technology"></p>
<ul>
<li>
<p><strong>Security:</strong> a system-to-system dataflow is only as good as it is secure. At every point in a dataflow, NiFi offers secure exchange through encrypted protocols such as two-way SSL. You can also manage access to NiFi with LDAP / AD to simplify your security scheme.</p>
</li>
<li>
<p><strong>Guaranteed Delivery:</strong> the solution must be fault tolerant and guarantee data delivery, because we have very sensitive data subject to regulatory and legal conditions. The integration layer is one of the points where disaster could become a reality. Here, NiFi keeps the work safe and guarantees data delivery between the source and target systems.</p>
<p><em>This is achieved through effective use of a purpose-built persistent write-ahead log and content repository. Together they are designed in such a way as to allow for very high transaction rates, effective load-spreading, copy-on-write, and play to the strengths of traditional disk read/writes.</em> (<a href="https://nifi.apache.org/docs/nifi-docs/html/overview.html">NiFi doc</a>)</p>
</li>
</ul>
<p><strong>My use cases:</strong></p>
<p>At this moment I work for the biggest ISP and telco provider in Argentina, in the Data Architect role (Big Data architecture included :p ).<br>
There are many use cases that call for a data flow solution, but I can list some examples:</p>
<p><strong>DHCP logs:</strong> this is a real problem for us, because this data is very, very critical. When the police or the justice system asks my company <em>&quot;who had this IP address at this moment (day:hour:minute)&quot;</em>, we have to answer with precision and quickly. So... here we have our <em>quality attributes</em>:</p>
<ul>
<li>Security: this info is sensitive and critical.</li>
<li>Guaranteed delivery: we can't lose data in the dataflow process. Imagine if the justice system asks my company about some case and we don't have the correct answer.</li>
<li>Scalability: every year this data source grows in volume. We need to scale it quickly, and a fault-tolerance scheme is needed.</li>
</ul>
<p><strong>Data Center Logs:</strong> NiFi is amazing for this, because we need to integrate syslogs from our servers and NiFi handles this well. It has a syslog listener that parses the syslog messages.</p>
<p><img src="http://martin-gatto.com/content/images/2017/10/syslog.png" alt="Be happy with Data Flow Integration Technology"></p>
<p><strong>Service integrations:</strong> we often receive data from systems where we can't go and fetch the data from the source ourselves, and the provider gives us the possibility of sending the data in an HTTP message (asynchronous integration). NiFi solves this and other similar use cases with the &quot;HandleHttpRequest&quot; processor, where some system sends me the data in an HTTP request and NiFi processes it.</p>
<p><img src="http://martin-gatto.com/content/images/2017/10/nifi_service.png" alt="Be happy with Data Flow Integration Technology"></p>
<p>So, this solution is very flexible and makes our work simple. In fact, NiFi makes it possible to turn integrations whose development would take weeks of work into a job of only a few days, and in some cases, hours.</p>
<h2 id="anarchitectureexample">An Architecture example:</h2>
<p><img src="http://martin-gatto.com/content/images/2017/10/Captura-de-pantalla-2017-10-19-a-la-s--10.16.06.png" alt="Be happy with Data Flow Integration Technology"></p>
<p>This is a very high-level description, but it is enough to understand the examples and the use cases:</p>
<ul>
<li><strong>Social network:</strong> you can use it to get data from the social networks' APIs. Imagine you have a new business need related to Twitter. The normal way is to use Java or Python to create the integration, something like this:</li>
</ul>
<p><img src="http://martin-gatto.com/content/images/2017/10/Captura-de-pantalla-2017-10-14-a-la-s--21.17.01.png" alt="Be happy with Data Flow Integration Technology"></p>
<p>But think of a big company where you need to create something in a short time, and where the operations teams have to understand, monitor and maybe solve a problem with the application; using NiFi makes all these dreams a reality. Look at the next image:</p>
<p><img src="http://martin-gatto.com/content/images/2017/10/Captura-de-pantalla-2017-10-14-a-la-s--21.24.09.png" alt="Be happy with Data Flow Integration Technology"></p>
<p>This flow shows how to index tweets with Solr using NiFi.</p>
<ul>
<li><strong>Data Center Syslogs:</strong> you can adjust your <a href="https://www.server-world.info/en/note?os=CentOS_7&amp;p=rsyslog">rsyslog config</a> to send your syslogs to a NiFi syslog listener. See the next example:</li>
</ul>
<p><img src="http://martin-gatto.com/content/images/2017/10/Captura-de-pantalla-2017-10-14-a-la-s--21.46.24.png" alt="Be happy with Data Flow Integration Technology"></p>
<p>You have to send to NiFi's ip:port address and configure your syslog listener to listen on the port you will use to receive the packets.</p>
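<p>A minimal sketch of what the rsyslog side might look like, using the legacy forwarding syntax; the host name and port 5140 are placeholders for wherever your NiFi syslog listener actually runs, and the protocol must match the listener's configuration:</p>

```
# /etc/rsyslog.conf -- forward everything to the NiFi syslog listener
# single @ = UDP, double @@ = TCP; keep only the line matching your listener
*.* @@nifi-host.example.com:5140
```

<p>On the NiFi side, a <code>ListenSyslog</code> processor configured on the same port and protocol receives and parses these messages.</p>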
<ul>
<li><strong><a href="https://es.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol">DHCP Logs</a>:</strong> you can create a very low latency agent with <a href="https://nifi.apache.org/minifi/">Apache MiNiFi</a> to tail the DHCP logs and send them to your Kafka topic; after that, you read the topic with your NiFi cluster and then send the data to an HBase database.<br>
This approach also works for set-top boxes, where you get a log from the S.T.B. and then send it to your Kafka topic.</li>
</ul>
<h2 id="resume">Summary:</h2>
<p>The uses for NiFi are not infinite, but they are many, because it is very flexible and covers, in my opinion, the three critical points:</p>
<ol>
<li>
<p>Developer teams: simple to use and learn.</p>
</li>
<li>
<p>Operations teams: simple to operate, monitor and understand when bugs appear.</p>
</li>
<li>
<p>Architecture teams: simple to apply to many use cases in architecture solutions.</p>
</li>
</ol>
<p>You will get many benefits from NiFi:</p>
<ul>
<li>Simplicity for developers.</li>
<li>Time to market.</li>
<li>Security.</li>
<li>Flexibility.</li>
<li>Easy to understand.</li>
<li>Scalable.</li>
<li>Powerful data flow movement.</li>
<li>The possibility of contracting Hortonworks support if you need level 3 | 4 help.</li>
<li>An open source community working to add functionality and fix bugs.</li>
</ul>
<p>Thanks</p>
<p>Kind Regards,</p>
<p><a href="http://martin-gatto.com/me-presento/">Martin Gatto</a>.</p>
<p><em><strong>Contact:</strong></em></p>
<ul>
<li><em><strong>Linkedin:</strong></em> <a href="https://www.linkedin.com/in/martingatto/">https://www.linkedin.com/in/martingatto/</a></li>
<li><em><strong>Twitter:</strong></em> <a href="https://twitter.com/gattom83">https://twitter.com/gattom83</a></li>
</ul>
</div>]]></content:encoded></item><item><title><![CDATA[Big data means talking about teams.]]></title><description><![CDATA[Data Scientists vs. Data Engineers... the friction between the different actors involved.]]></description><link>http://martin-gatto.com/para-comenzar/</link><guid isPermaLink="false">5b16c50f53de814f8b51b015</guid><category><![CDATA[Big Data]]></category><dc:creator><![CDATA[Martin Gatto]]></dc:creator><pubDate>Wed, 11 Oct 2017 12:15:00 GMT</pubDate><media:content url="http://martin-gatto.com/content/images/2017/09/team.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><h1 id="problemtica"><em><strong>The Problem:</strong></em></h1>
<img src="http://martin-gatto.com/content/images/2017/09/team.jpg" alt="Big data means talking about teams."><p>Basically, I will not talk about technical aspects here; I am going to talk about teams. I will describe a problem I am immersed in every day as an architect, one that is always a challenge to solve, and I will lay out my proposal.</p>
<p>My idea is to discuss a problem faced by those who already have a Big Data platform (no matter its size) and cannot make progress on all those beautiful use cases that the prophets of <em>Big Big Data</em> shout to the four winds! Hence the photo on this article... irony; I hate those prophets (I will write about them later), because they fail to solve a problem that goes far beyond the data.</p>
<p><img src="http://martin-gatto.com/content/images/2018/06/Haka.jpg" alt="Big data means talking about teams."></p>
<p>I am going to write about the TEAMS involved in this discipline. The premise is &quot;how do we talk about teams when we talk about Big Data...&quot;. To be clear, I will also cover technical aspects in upcoming posts, since I am preparing a wiki with manuals, labs, tutorials, etc.; after all, I am still a simple, mortal techie.</p>
<p><em><strong>Introduction:</strong></em></p>
<p>We deploy a Big Data cluster that can process everything you throw at it, store it, and do whatever you want with the data... And now what?</p>
<p>It is very common, always in the context of a large company, for a Big Data cluster to be more than welcome as a way to solve problems around handling large volumes of data. But it is just as common for it to bring other problems we did not know about, related to its use: its monetization and everything concerning the work of the different teams that interact with it to extract the <strong>value</strong>.</p>
<blockquote>
<p><em>So, what is the problem with the teams?...</em></p>
</blockquote>
<p>Basically, this is not the usual activity where a requirement exists and the IT department develops it. Here there must be a close relationship between the business and the &quot;IT guys&quot;, where both build the solution from their own position and skill set, and neither can live without the other. It looks like an implicit wedding (marriage), but until both sides know they are married, things will never work.</p>
<blockquote>
<p><em>What does this mean?</em></p>
</blockquote>
<p>It means that, in general, and given the profiles we IT people have, it is not feasible to develop high-performance distributed-processing solutions and, at the same time, also think about the monetizable aspects of that information and its value to the business. The skill sets are different, and both are required to reach a good outcome.</p>
<p>We have to be realistic, and to explain this more concretely, we must first understand the goals we are pursuing.<br>
When we deploy a Big Data cluster, merely considering its associated costs leads us to a simple equation: <strong>Cluster = Investment Cost, therefore I must obtain value / return on that investment</strong>.</p>
<p>So we already have two variables:</p>
<blockquote>
<p><strong>Cost:</strong> Which represents the investment made in technology.</p>
</blockquote>
<blockquote>
<p><strong>Value:</strong> Which will be the return on that Cost, represented not only in processing capacity but also in the value found in the data that makes it monetizable.</p>
</blockquote>
<blockquote>
<p>But, what is value?</p>
</blockquote>
<p>Let's define value as the ability to build high-performance, highly available solutions that, through the use of:</p>
<ul>
<li>Distributed processing systems.</li>
<li>Large volumes of data.</li>
<li>Mathematical and statistical methods.</li>
<li>Proper communication of the results.</li>
<li>Subject-matter expertise (experience in the working area).</li>
</ul>
<p>allow us to reach the goal of producing valuable deliverables that improve day-to-day decisions or contribute to the profitability of the business.</p>
<p>So, coming back to this &quot;implicit wedding (marriage)&quot;, it is inevitable to talk about the actors, since in some way they are the ones who make up the team.</p>
<blockquote>
<p>Wedding (marriage) of actors <img src="http://martin-gatto.com/content/images/2018/06/www-tenstickers-co-uk.png" alt="Big data means talking about teams."></p>
</blockquote>
<p>In the Big Data ecosystem, we find two main actors:</p>
<p><em>On one side, the</em> <em><a href="https://en.wikipedia.org/wiki/Data_science">Data Scientists</a></em>, who seek to understand and analyze real-world phenomena from the data, using descriptive and statistical techniques.</p>
<p><em>On the other side</em>, we have the <em><strong>Data Engineers</strong></em>, whose job is to capture, collect and process volumes of data that would be impossible to handle with conventional storage and processing tools. Here an important attribute comes in... <strong>building high-performance solutions.</strong></p>
<p>Valuable results must come from the teamwork of these two groups, where one is more devoted to finding the value in the data while the other builds the high-performance applications that make that work easier.</p>
<p>This brings us, in a way, to a good definition:</p>
<blockquote>
<p>Data Scientists (D.S.) are on the path of discovering, predicting and describing facts about the real world. For this, advanced skills in mathematics and statistics are essential, with a rather lower profile in software development and programming.</p>
</blockquote>
<blockquote>
<p>Data Engineers (S.E. or D.E.) are on the path of developing high-performance applications that can process, store and move unimaginable amounts of data.</p>
</blockquote>
<blockquote>
<p>What both have in common is that they must possess advanced knowledge of the area they will work in (subject-matter expertise).</p>
</blockquote>
<p>On top of this, there are the marked differences between a D.S. and a D.E.<br>
In practice, these differences mean that developers and data scientists often have trouble working together. Standard software development practices do not really work for the data scientist's exploratory mode of work, because the goals are different. Introducing code reviews and tidy solutions would not work for data scientists and would slow them down. Likewise, applying that exploratory mode to production systems will not work either, if the goal is to build high-performance solutions.</p>
<p>If we analyze the flow of an architecture all the way to data visualization, we can see where each one's skills have an impact:</p>
<p><img src="http://martin-gatto.com/content/images/2018/06/Captura-de-pantalla-2017-09-24-a-la-s--17.33.52-compressor.png" alt="Big data means talking about teams."></p>
<p>Note that in the capture and processing stages of the architecture the D.E. roles are in greater demand, but as we get closer to the value of the data, the knowledge of a D.S. becomes more valuable.</p>
<p>That said, we still have not solved how to make these two very different universes play toward the same goal.</p>
<p><img src="http://martin-gatto.com/content/images/2018/06/Captura-de-pantalla-2017-09-24-a-la-s--17.44.50-compressor.png" alt="Big data means talking about teams."></p>
<blockquote>
<p><strong>So, how can we structure the collaboration so that it is more productive for both sides?</strong></p>
</blockquote>
<h1 id="medianterequerimientosescritosendocumentos">Mediante requerimientos escritos en documentos ?</h1>
<p>Definitivamente creo que esta no es una solución, ya que de alguna forma este método lo que plantea es mantener separados a los equipos y conectados mediante documentación. Ya sabemos como termina esta situación.</p>
<p><img src="http://martin-gatto.com/content/images/2018/06/IMG_2367-compressor.jpg" alt="Big data es hablar de equipo."></p>
<p>Si bien este método puede funcionar (lo dudo mucho en mi humilde opinión), es un método que no resuelve la necesidad de ser ágiles y resolver con velocidad los requerimientos.</p>
<h1 id="uniendoelaguayelaceite">Uniendo el agua y el aceite ?</h1>
<p>Puede que suene un tanto a locura, pero claramente si hablamos de unir, esto debe significar el todo por el todo y sin rodeos.</p>
<p>Existe una gran diferencia de perfiles, pero dentro del mismo departamento de IT lo existe también, solo que desde hace tiempo, observo que esta diferencia se acentúa cuando ademas de esto, existen separaciones organizativas que promueven esta separación (estructura, procesos con &quot;mas documentos&quot;, etc).</p>
<p>Los componentes de este sistema (los D.S y los D.E) son muy diferentes, pero por separado, esto sería un sistema donde un engranaje no hace girar al otro.</p>
<p><img src="http://martin-gatto.com/content/images/2018/06/Captura-de-pantalla-2017-09-25-a-la-s--20.54.23-compressor.png" alt="Big data es hablar de equipo."></p>
<p>I believe it is increasingly common, and looking at the market around me confirms it, that over time these profiles will have more and more in common (D.S. with software knowledge and D.E. with more knowledge of mathematical methods).<br>
In addition, and as is already visible in many large companies, the figure of the Chief Data Officer now exists at the &quot;C&quot; level of the organization, and the Data Scientist and Data Engineer teams live under the same working group and not as separate entities.</p>
<p>It is impossible for one part of an engine to move another unless both are inside the same engine. A company's data team should, in my humble opinion, be one team, not an assembly of separate parts.</p>
<p>Thank you very much.</p>
<p>Martin.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Data Lake is more than ‘dump your data here’.]]></title><description><![CDATA[When we talk about "Big Data" we really talk about big problems if we don't do our work in a good way.]]></description><link>http://martin-gatto.com/data-lakes-dr/</link><guid isPermaLink="false">5b16c50f53de814f8b51b017</guid><category><![CDATA[Big Data]]></category><category><![CDATA[BigDataArchitecture]]></category><dc:creator><![CDATA[Martin Gatto]]></dc:creator><pubDate>Wed, 04 Oct 2017 02:57:15 GMT</pubDate><media:content url="http://martin-gatto.com/content/images/2017/09/oie_920156Ab8lppGz.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="http://martin-gatto.com/content/images/2017/09/oie_920156Ab8lppGz.png" alt="Data Lake is more than ‘dump your data here’."><p>Last week I was reading the Hortonworks Careers page and I saw something interesting in a job posting.<br>
The description listed several job requirements, but one key requirement caught my attention... &quot;Data Lake is more than 'dump your data here'&quot;.</p>
<p>I thought about it for a second, and the need to write about it won, because I think this is an interesting problem to analyze if I compare that concept with the big data evangelists' message... &quot;put everything here and later we'll see what we can do with it...&quot;.</p>
<blockquote>
<p>The first point is... THIS IS TRUE, MY FRIENDS: a data lake is not only <strong>'dump your data here'</strong></p>
</blockquote>
<p>When we talk about <strong>Big Data</strong>, we are really talking about big problems if we do not do our work well.</p>
<p><img src="http://martin-gatto.com/content/images/2017/09/440px-WasteFinalDeposited.jpg" alt="Data Lake is more than ‘dump your data here’."></p>
<p>Look at this image... I think this is what a real data lake looks like when you don't think about how to archive your data for future use (&quot;put everything here and later we'll see what we can do with it...&quot;). Tell me, can you imagine finding anything here when you need something specific?</p>
<p>The data discovery processes executed by Data Scientists need something from the Data Engineers; those &quot;things&quot; are order and the ability to make the data understandable to others.</p>
<p><img src="http://martin-gatto.com/content/images/2018/06/Abren-planta-reciclar-basura-portena_IECIMA20130103_0004_7.jpg" alt="Data Lake is more than ‘dump your data here’."></p>
<p>Please, just take a look at these two pictures. In the first, you have a very big mountain (or lake) of trash; in the second, you have very big quantities of trash ordered, separated and classified.<br>
Now think of those pictures as a data lake, and think about the processes you would need to find something in each of those two scenarios. It's true, we prefer the scenario where the data is ordered and working for us (the second picture), because the data lake is not only 'dump your data here'.</p>
<p>So... where am I going with this, and what is the solution to this problem?</p>
<blockquote>
<p>Solution = Order, data consistency and grouping of data by domain</p>
</blockquote>
<p><img src="http://martin-gatto.com/content/images/2018/06/Wait-What.jpg" alt="Data Lake is more than ‘dump your data here’."></p>
<p>In companies, we build our systems around architecture frameworks. These architecture frameworks define rules and methodologies for systems, processes, data and integrations.</p>
<p>For example: telcos have the TM Forum frameworks, where you can find standards for applications, solutions, integrations and data.</p>
<p>The Information Framework (SID) defines data levels. Level one is the SID domain level, which groups the principal domains: Customer, Services, Products, Resources, Market, etc.</p>
<p>Each domain has ABEs (Aggregate Business Entities, e.g. Customer, CustomerBill, CustomerOrder), and each ABE groups data entities that represent a specific part of the data domain.</p>
<p><img src="http://martin-gatto.com/content/images/2018/06/sid_model_2013.png" alt="Data Lake is more than ‘dump your data here’."></p>
<p>Let's apply this knowledge to a data lake: maybe we can use this methodology to make our data discovery processes more efficient, but first, to make our data archiving processes more efficient:</p>
<p>For example:</p>
<ol>
<li>Maybe you have to archive data about your products, so you can save this data in the &quot;product&quot; database if you are working with Hive or Impala.</li>
<li>If you need to archive billing data, you have to use the Customer database, because this information is related to the customer account data and the customer data.</li>
<li>But... what happens if I need to archive log data and my services' usage? This information is related to resource usage, so we have a place for this data in the framework too.</li>
</ol>
<p><img src="http://martin-gatto.com/content/images/2018/06/Captura-de-pantalla-2017-10-03-a-la-s--23.01.17.png" alt="Data Lake is more than ‘dump your data here’."></p>
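<p>In Hive, this mapping can be sketched as one database per SID level-one domain. The database and table names below are only illustrative (they are not defined by the framework itself):</p>

```sql
-- One Hive database per SID level-one domain
CREATE DATABASE IF NOT EXISTS product;
CREATE DATABASE IF NOT EXISTS customer;
CREATE DATABASE IF NOT EXISTS resource;

-- Billing data lands in the Customer domain, next to the account data
CREATE TABLE IF NOT EXISTS customer.customer_bill (
  bill_id    STRING,
  account_id STRING,
  amount     DECIMAL(12,2),
  bill_date  DATE
)
STORED AS ORC;

-- Service usage logs land in the Resource domain
CREATE TABLE IF NOT EXISTS resource.service_usage_log (
  event_time  TIMESTAMP,
  resource_id STRING,
  payload     STRING
)
STORED AS ORC;
```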
<p>We also get other benefits when we order and group our data by domain in <strong>Data Lakes</strong>:</p>
<ul>
<li><strong>Governance:</strong> Data owners are in general also grouped by domains, so governance becomes easier to manage and define.</li>
<li><strong>Security:</strong> You can manage your security policies in Ranger (if you are using Hortonworks and Hive) more efficiently by grouping the policies by domain and by the tables within each database.</li>
<li><strong>Clarity in the matter:</strong> You just have to know &quot;which domain has the data I need&quot;, so it is easier to find what you need. With the concept of &quot;put everything here and later we'll see what we can do with it...&quot; you face a tiring job, first finding what you need and then extracting some value from what you find.</li>
<li><strong>Common Language:</strong> Not everyone in the company who works with the data lake will know everything it contains, but it is realistic to expect them to understand the domains and their definitions.</li>
<li><strong>Business Processes and Applications:</strong> It can be a good way to understand how the data lake relates to the company's processes, to the applications that support those processes, and to the data inside those apps.</li>
</ul>
<p>Thanks :)</p>
<p>Kind Regards</p>
<p>Martin.</p>
</div>]]></content:encoded></item><item><title><![CDATA[I introduce myself....]]></title><description><![CDATA[Just an introduction]]></description><link>http://martin-gatto.com/me-presento/</link><guid isPermaLink="false">5b16c50f53de814f8b51b016</guid><category><![CDATA[Varios]]></category><dc:creator><![CDATA[Martin Gatto]]></dc:creator><pubDate>Tue, 26 Sep 2017 00:39:30 GMT</pubDate><media:content url="http://martin-gatto.com/content/images/2021/01/hi2.gif" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><h1 id="mymotivation"><em><strong>My Motivation:</strong></em></h1>
<img src="http://martin-gatto.com/content/images/2021/01/hi2.gif" alt="I introduce myself...."><p>The motivation of this article, is a presentation, but I must admit that the spirit of community, led me in some way to write to share, write to learn and share to help, that is, to help someone else.</p>
<p>I have been working in this field for more than 13 years, with an insatiable hunger for knowledge and experiences that have given me so much in technology, in data and, of course, in personal terms.<br>
I have worked with Business Intelligence architectures, Big Data architectures, OSS and BSS architectures and different technologies, and I learned a lot from all of that, even from the mistakes.</p>
<p>For about 6 years, I have been venturing into the world of Architecture, a world in which I had to transform myself and do things I never thought I would do technologically. That insatiable hunger for knowledge, I think Architecture somehow turned it into something infinite (Java, Python, Scala, streaming, HBase, Hadoop, networking, structured data, unstructured data, Apache Spark, Apache Hive, Hortonworks, Kafka, data architecture, enterprise architecture and much more).</p>
<blockquote>
<p>community ....</p>
</blockquote>
<p><img src="http://martin-gatto.com/content/images/2018/06/Captura-de-pantalla-2018-06-05-a-las-20.56.48.png" alt="I introduce myself...."></p>
<h1 id="whoami"><em><strong>Who Am I?:</strong></em></h1>
<p>My name is <strong>Martin</strong> and, as funny as it may sound, my last name is <strong>Gatto</strong>, yes... like the animal (in Spanish, 'gato') but with a double 't'.</p>
<p>For more than 13 years I have worked in the world of data and technology. I started as a programmer of integration processes and continued my career with great enthusiasm through everything related to IT architecture.</p>
<p>During those 13 years, I went through many sectors and roles, working as a developer and technical leader, and when I thought there was nothing left to learn I met the ARCHITECTURE WORLD and found in it a totally new and exciting world.</p>
<p>I learned to live with new problems, to think about solutions and the implications their design carries. I learned that every day you can think, design and look for new things to discover. Architecture FRAMEWORKS are real suffering for the brain and very, very unpleasant, but extremely valuable.</p>
<p>All of that led me to write this blog... the reason... to share, to help, to write so I don't forget, and to use this channel to help other people.</p>
<p>Thanks :)</p>
<p>Martin Gatto</p>
</div>]]></content:encoded></item></channel></rss>