<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Nazmul Hossain]]></title><description><![CDATA[Just try, learn, share and repeat :)]]></description><link>https://nhossaincse.com/</link><image><url>https://nhossaincse.com/favicon.png</url><title>Nazmul Hossain</title><link>https://nhossaincse.com/</link></image><generator>Ghost 5.3</generator><lastBuildDate>Wed, 06 May 2026 10:58:14 GMT</lastBuildDate><atom:link href="https://nhossaincse.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[IoT Database Design Considerations (PostgreSQL)]]></title><description><![CDATA[<p>Handling IoT (Internet of Things) data in PostgreSQL involves designing a database schema that accommodates the unique characteristics of IoT data. Here are some general guidelines and considerations, along with a basic ER (Entity-Relationship) diagram for an IoT database.</p><h3 id="guidelines-for-handling-iot-data-in-postgresql"><strong>Guidelines for Handling IoT Data in PostgreSQL:</strong></h3><ol><li><strong>Identify IoT Devices:</strong></li></ol><ul><li>Create</li></ul>]]></description><link>https://nhossaincse.com/iot-database-design-considerations-postgresql/</link><guid isPermaLink="false">6566f6ac4d37c40001a71479</guid><category><![CDATA[IoT]]></category><category><![CDATA[bigdata]]></category><category><![CDATA[Database]]></category><category><![CDATA[PostgreSQL]]></category><dc:creator><![CDATA[Md Nazmul Hossain]]></dc:creator><pubDate>Sat, 09 Sep 2023 09:36:00 GMT</pubDate><media:content url="https://nhossaincse.com/content/images/2023/11/IoT_Data.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://nhossaincse.com/content/images/2023/11/IoT_Data.jpg" alt="IoT Database Design Considerations (PostgreSQL)"><p>Handling IoT 
(Internet of Things) data in PostgreSQL involves designing a database schema that accommodates the unique characteristics of IoT data. Here are some general guidelines and considerations, along with a basic ER (Entity-Relationship) diagram for an IoT database.</p><h3 id="guidelines-for-handling-iot-data-in-postgresql"><strong>Guidelines for Handling IoT Data in PostgreSQL:</strong></h3><ol><li><strong>Identify IoT Devices:</strong></li></ol><ul><li>Create a table to store information about the IoT devices (e.g., device_id, device_type, manufacturer, etc.).</li></ul><pre><code class="language-sql">CREATE TABLE devices (
    device_id SERIAL PRIMARY KEY,
    device_type VARCHAR(50),
    manufacturer VARCHAR(50)
    -- Other device attributes
);

</code></pre><p><strong>2. Capture Sensor Data:</strong></p><ul><li>Create a table to store sensor data readings. This table can be partitioned by time to improve query performance over time-series data.</li></ul><pre><code class="language-sql">CREATE TABLE sensor_data (
    reading_id SERIAL PRIMARY KEY,
    device_id INT REFERENCES devices(device_id),
    timestamp TIMESTAMP,
    sensor_type VARCHAR(50),
    value NUMERIC
    -- Other sensor data attributes
);

</code></pre><p><strong>3. Location Tracking:</strong></p><ul><li>If your IoT devices have location data, include a table to store this information.</li></ul><pre><code class="language-sql">CREATE TABLE device_location (
    location_id SERIAL PRIMARY KEY,
    device_id INT REFERENCES devices(device_id),
    timestamp TIMESTAMP,
    latitude NUMERIC,
    longitude NUMERIC
    -- Other location attributes
);

</code></pre><p><strong>4. Event Logging:</strong></p><ul><li>Capture events and logs related to device activities.</li></ul><pre><code class="language-sql">CREATE TABLE device_events (
    event_id SERIAL PRIMARY KEY,
    device_id INT REFERENCES devices(device_id),
    timestamp TIMESTAMP,
    event_type VARCHAR(50)
    -- Other event attributes
);

</code></pre><p><strong>5. Security and Authentication:</strong></p><ul><li>If security is a concern, implement a table for user authentication and authorization.</li></ul><pre><code class="language-sql">CREATE TABLE users (
    user_id SERIAL PRIMARY KEY,
    username VARCHAR(50) UNIQUE,
    password_hash VARCHAR(255)
    -- Other user attributes
);

</code></pre><p><strong>6. Relationships:</strong></p><ul><li>Establish relationships between tables using foreign keys to maintain data integrity.</li></ul><h3 id="ideal-er-diagram"><strong>Ideal ER Diagram:</strong></h3><p>An ER diagram for this schema would show <strong>devices</strong> at the center, with one-to-many (1:M) relationships fanning out to <strong>sensor_data</strong>, <strong>device_location</strong>, and <strong>device_events</strong>, each linked through the <code>device_id</code> foreign key. Diagram conventions:</p><ul><li>Each box represents a table.</li><li>Lines with diamonds indicate relationships, and the cardinality is marked on each side (e.g., 1:M for one-to-many relationships).</li><li>Primary keys are underlined.</li><li>Foreign keys are denoted with an arrow pointing to the referenced table.</li></ul><h1 id="handling-large-volumes-of-data-and-ensuring-that-queries-remain-performant-over-time">Handling large volumes of data and ensuring that queries remain performant over time</h1><p>Handling large volumes of data and ensuring that queries remain performant over time is a common challenge in database management, especially for IoT applications with continuous data streams. Here are some strategies to address this issue:</p><p><strong>Data Partitioning:</strong></p><ul><li>Partition large tables into smaller, more manageable pieces based on a chosen partition key, such as time or device ID. This helps distribute data across multiple storage locations, making queries on a subset of the data more efficient.</li></ul><pre><code class="language-sql">CREATE TABLE sensor_data (
    reading_id BIGSERIAL,
    device_id INT REFERENCES devices(device_id),
    timestamp TIMESTAMP,
    sensor_type VARCHAR(50),
    value NUMERIC,
    -- Other sensor data attributes
    -- A unique constraint on a partitioned table must include the partition key
    PRIMARY KEY (reading_id, timestamp)
) PARTITION BY RANGE (timestamp);

</code></pre><p><strong>Indexing:</strong></p><ul><li>Proper indexing is crucial for fast query performance. Identify columns frequently used in WHERE clauses and create indexes on those columns. However, be cautious not to over-index, as it can impact write performance.</li></ul><pre><code class="language-sql">CREATE INDEX idx_timestamp ON sensor_data(timestamp);
CREATE INDEX idx_device_id ON sensor_data(device_id);
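
-- If most queries filter on both device and time range, a composite index
-- can serve them in a single pass (index name is illustrative):
CREATE INDEX idx_device_time ON sensor_data(device_id, timestamp);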

</code></pre><p><strong>Materialized Views:</strong></p><ul><li>Use materialized views for pre-aggregated or pre-joined data that is queried frequently. This can significantly improve query response times by avoiding the need to perform complex calculations on the fly.</li></ul><pre><code class="language-sql">CREATE MATERIALIZED VIEW daily_sensor_avg AS
SELECT
    device_id,
    date_trunc(&apos;day&apos;, timestamp) AS day,
    AVG(value) AS avg_value
FROM
    sensor_data
GROUP BY
    device_id, day;
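
-- A materialized view is a snapshot; refresh it periodically (e.g. from a
-- scheduler) to pick up newly ingested readings:
REFRESH MATERIALIZED VIEW daily_sensor_avg;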

</code></pre><p><strong>Archiving and Purging:</strong></p><ul><li>Regularly archive or purge old data that is no longer needed. This reduces the volume of data in the active database and can improve query performance.</li></ul><pre><code class="language-sql">DELETE FROM sensor_data WHERE timestamp &lt; &apos;2022-01-01&apos;;

</code></pre><p><strong>Compression:</strong></p><ul><li>Consider compression if storage space is a concern. Core PostgreSQL compresses oversized column values automatically through TOAST; there is no table-level compression switch. From PostgreSQL 14 you can choose the per-column compression method, and extensions such as TimescaleDB offer columnar compression for time-series workloads.</li></ul><pre><code class="language-sql">-- PostgreSQL 14+ (built with lz4 support): choose the TOAST compression
-- method for a column
ALTER TABLE sensor_data ALTER COLUMN sensor_type SET COMPRESSION lz4;

</code></pre><p><strong>Regular Database Maintenance:</strong></p><ul><li>Perform regular database maintenance tasks, such as vacuuming and analyzing, to optimize the performance of the PostgreSQL database.</li></ul><pre><code class="language-sql">VACUUM ANALYZE;

</code></pre><p><strong>Scaling:</strong></p><ul><li>Consider scaling horizontally by using techniques such as sharding or distributed databases to distribute the load across multiple servers.</li></ul><pre><code class="language-sql">-- Example of sharding with Citus extension
SELECT create_distributed_table(&apos;sensor_data&apos;, &apos;device_id&apos;);
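
-- Note: this assumes the Citus extension is already enabled in the database
-- (CREATE EXTENSION citus;)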

</code></pre><p><strong>Caching:</strong></p><ul><li>Implement caching mechanisms to store and retrieve frequently accessed data in memory, reducing the need to query the database for the same data repeatedly.</li></ul><p><strong>Query Optimization:</strong></p><ul><li>Regularly review and optimize your queries. Use the <strong><code>EXPLAIN</code></strong> command to analyze query plans and identify areas for improvement.</li></ul><pre><code class="language-sql">EXPLAIN SELECT * FROM sensor_data WHERE device_id = 123;

</code></pre><p>By implementing these strategies, you can manage large volumes of data in PostgreSQL while ensuring that queries remain performant over time.</p><h1 id="storing-sensor-data-at-a-rate-of-30000-rows-per-minute-and-keeping-the-last-5-years-of-data">Storing sensor data at a rate of 30,000 rows per minute and keeping the last 5 years of data</h1><p>When storing sensor data at a rate of 30,000 rows per minute while retaining the last five years of data, efficient management of the <strong><code>sensor_data</code></strong> table becomes crucial. Here are precise recommendations tailored to this scenario:</p><p><strong>Data Partitioning by Time:</strong></p><ul><li>Partition the <strong><code>sensor_data</code></strong> table by time, with a partition key based on the timestamp. This allows for quick retrieval of data within specific time ranges.</li></ul><pre><code class="language-sql">CREATE TABLE sensor_data (
    -- BIGSERIAL: at 30,000 rows/minute, an INT key would overflow in under two months
    reading_id BIGSERIAL,
    device_id INT REFERENCES devices(device_id),
    timestamp TIMESTAMP,
    sensor_type VARCHAR(50),
    value NUMERIC,
    -- Other sensor data attributes
    -- The primary key must include the partition key on a partitioned table
    PRIMARY KEY (reading_id, timestamp)
) PARTITION BY RANGE (timestamp);

-- Create partitions for each year
CREATE TABLE sensor_data_2023 PARTITION OF sensor_data
    FOR VALUES FROM (&apos;2023-01-01&apos;) TO (&apos;2024-01-01&apos;);
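
-- Optionally add a default partition to catch rows outside every defined range:
CREATE TABLE sensor_data_default PARTITION OF sensor_data DEFAULT;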

</code></pre><p><strong>Indexes:</strong></p><ul><li>Create indexes on columns commonly used in queries, such as the <strong><code>timestamp</code></strong> and <strong><code>device_id</code></strong>.</li></ul><pre><code class="language-sql">CREATE INDEX idx_timestamp ON sensor_data(timestamp);
CREATE INDEX idx_device_id ON sensor_data(device_id);

</code></pre><p><strong>Archiving and Purging:</strong></p><ul><li>Regularly archive or purge old data to keep the active dataset manageable. In this case, you can archive or delete data older than 5 years.</li></ul><pre><code class="language-sql">DELETE FROM sensor_data WHERE timestamp &lt; NOW() - INTERVAL &apos;5 years&apos;;
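
-- If sensor_data is partitioned by time, dropping the expired partition is far
-- cheaper than a bulk DELETE (partition name is illustrative):
-- DROP TABLE sensor_data_2021;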

</code></pre><p><strong>Compression:</strong></p><ul><li>Consider compression to save storage space. Core PostgreSQL compresses large values automatically via TOAST (with a per-column method selectable since version 14); for heavy time-series compression, look at extensions such as TimescaleDB.</li></ul><pre><code class="language-sql">-- PostgreSQL 14+ (built with lz4 support): choose the TOAST compression
-- method per column
ALTER TABLE sensor_data ALTER COLUMN sensor_type SET COMPRESSION lz4;

</code></pre><p><strong>Table Maintenance:</strong></p><ul><li>Perform routine maintenance tasks, such as vacuuming and analyzing, to optimize the performance of the <strong><code>sensor_data</code></strong> table.</li></ul><pre><code class="language-sql">VACUUM ANALYZE sensor_data;

</code></pre><p><strong>Adjust Autovacuum Settings:</strong></p><ul><li>Tune autovacuum settings to suit the write-intensive nature of your IoT application. Ensure that autovacuum is adequately managing dead rows and reclaiming space.</li></ul><pre><code class="language-sql">-- Adjust autovacuum settings in postgresql.conf or dynamically
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.05
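
-- These can also be set per table, which is often preferable for a single
-- hot table (the value is illustrative):
ALTER TABLE sensor_data SET (autovacuum_vacuum_scale_factor = 0.05);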

</code></pre><p><strong>Consider Data Aggregation:</strong></p><ul><li>Depending on the nature of queries, consider aggregating data at a coarser granularity (e.g., hourly or daily averages) and storing this aggregated data separately. This can significantly speed up certain types of queries.</li></ul><pre><code class="language-sql">CREATE TABLE hourly_sensor_avg AS
SELECT
    device_id,
    date_trunc(&apos;hour&apos;, timestamp) AS hour,
    AVG(value) AS avg_value
FROM
    sensor_data
GROUP BY
    device_id, hour;
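
-- An index on the aggregate table keeps per-device lookups fast (name is illustrative):
CREATE INDEX idx_hourly_device ON hourly_sensor_avg(device_id, hour);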

</code></pre><h1 id="data-partitioning-is-a-database-design-technique">Data partitioning is a database design technique</h1><p>Data partitioning is a database design technique where a large table is divided into smaller, more manageable pieces called partitions. Each partition contains a subset of the data based on a specific range or condition. In the context of IoT data, which often involves a time series, partitioning by time is a common and effective strategy.</p><p>Let&apos;s break down the provided SQL code for partitioning the <strong><code>sensor_data</code></strong> table by time:</p><pre><code class="language-sql">CREATE TABLE sensor_data (
    reading_id BIGSERIAL,
    device_id INT REFERENCES devices(device_id),
    timestamp TIMESTAMP,
    sensor_type VARCHAR(50),
    value NUMERIC,
    -- Other sensor data attributes
    -- The primary key must include the partition key on a partitioned table
    PRIMARY KEY (reading_id, timestamp)
) PARTITION BY RANGE (timestamp);

</code></pre><p>This SQL code creates the main table <strong><code>sensor_data</code></strong> with the following characteristics:</p><ul><li><strong><code>reading_id</code></strong>: A serial column that, together with <strong><code>timestamp</code></strong>, forms the composite primary key (PostgreSQL requires a unique constraint on a partitioned table to include the partition key).</li><li><strong><code>device_id</code></strong>: An integer column referencing the <strong><code>device_id</code></strong> in the <strong><code>devices</code></strong> table.</li><li><strong><code>timestamp</code></strong>: A timestamp column representing the time of the sensor reading.</li><li><strong><code>sensor_type</code></strong>: A varchar column for the type of sensor.</li><li><strong><code>value</code></strong>: A numeric column for the sensor reading value.</li><li>Other sensor data attributes can be added as needed.</li></ul><p>The crucial part here is <strong><code>PARTITION BY RANGE (timestamp)</code></strong>. This line specifies that the table will be partitioned based on the <strong><code>timestamp</code></strong> column, and the partitions will be defined using a range.</p><p>Now, to create partitions for specific time ranges (in this example, each year), the following code is used:</p><pre><code class="language-sql">-- Create a partition for the year 2023
CREATE TABLE sensor_data_2023 PARTITION OF sensor_data
    FOR VALUES FROM (&apos;2023-01-01&apos;) TO (&apos;2024-01-01&apos;);

</code></pre><p>Here, a new table <strong><code>sensor_data_2023</code></strong> is created as a partition of the main <strong><code>sensor_data</code></strong> table. The <strong><code>FOR VALUES FROM (&apos;2023-01-01&apos;) TO (&apos;2024-01-01&apos;)</code></strong> clause specifies that this partition will store data for the time range from January 1, 2023, to January 1, 2024.</p><p>By creating separate partitions for each year, you can efficiently manage and query data for specific time ranges. This design optimizes performance when retrieving data within a particular timeframe, as the database engine knows where to look based on the partitioning key (timestamp), reducing the amount of data that needs to be scanned.</p><h3 id="managing-subsequent-year-partitions"><strong>Managing Subsequent Year Partitions:</strong></h3><p><strong>Create a Template for Partition Creation:</strong></p><ul><li>Create a template SQL script that you can reuse for creating new yearly partitions. This template should include the necessary SQL statements to create a new partition for a specific year.</li></ul><pre><code class="language-sql">-- Template for creating yearly partition
CREATE TABLE sensor_data_&lt;year&gt; PARTITION OF sensor_data
    FOR VALUES FROM (&apos;&lt;year&gt;-01-01&apos;) TO (&apos;&lt;year + 1&gt;-01-01&apos;);

</code></pre><p><strong>Create Partitions for Subsequent Years:</strong></p><ul><li>As each new year begins, execute the template script with the appropriate year values to create partitions for subsequent years.</li></ul><pre><code class="language-sql">-- Create a partition for the year 2024
CREATE TABLE sensor_data_2024 PARTITION OF sensor_data
    FOR VALUES FROM (&apos;2024-01-01&apos;) TO (&apos;2025-01-01&apos;);

</code></pre><ul><li>Repeat this step for each new year.</li></ul><p><strong>Drop Old Partitions (Optional):</strong></p><ul><li>Depending on your data retention policy, you may want to drop partitions for years that are no longer relevant. Be cautious when doing this to avoid data loss.</li></ul><pre><code class="language-sql">-- Drop the partition for the year 2018 (example)
DROP TABLE sensor_data_2018;
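
-- Alternative: detach instead of drop, leaving a standalone table that can be
-- archived (dumped or moved to cheaper storage) before removal:
-- ALTER TABLE sensor_data DETACH PARTITION sensor_data_2018;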

</code></pre><p><strong>Automation with a Script:</strong></p><ul><li>To streamline the process, consider writing a script or using a scheduling tool that automates the creation of new yearly partitions based on the current date.</li><li>Example:</li></ul><pre><code class="language-bash">
#!/usr/bin/env bash
# A minimal sketch: create next year's partition via psql.
# Assumes a reachable database named "mydb"; adjust connection options as needed.
year=$(date +%Y)
next=$((year + 1))

psql -d mydb -c "CREATE TABLE IF NOT EXISTS sensor_data_${year} PARTITION OF sensor_data FOR VALUES FROM ('${year}-01-01') TO ('${next}-01-01');"

</code></pre><ul><li>Such a script can be scheduled to run at the beginning of each year.</li></ul><h3 id="important-considerations"><strong>Important Considerations:</strong></h3><ul><li><strong>Data Retention Policy:</strong></li><li>Ensure that you have a clear data retention policy in place. Dropping old partitions is a way to manage storage, but make sure it aligns with your business requirements.</li><li><strong>Testing:</strong></li><li>Before implementing any changes, thoroughly test the process in a staging environment to avoid unintended consequences in the production database.</li><li><strong>Monitoring:</strong></li><li>Implement monitoring to keep track of partition sizes, database performance, and any potential issues arising from the manual management process.</li><li><strong>Review and Update:</strong></li><li>Periodically review and update your partitioning strategy based on evolving data patterns and business needs.</li></ul><p>Remember, managing partitions manually can become challenging as the number of years grows. If automation is feasible within your environment, it is generally a more sustainable solution.</p><h1 id="in-postgresql-indexes">In PostgreSQL, indexes</h1><p>In PostgreSQL, indexes can be either clustered or non-clustered. Let&apos;s break down the concepts and understand the specific case of the <strong><code>sensor_data</code></strong> table.</p><h3 id="non-clustered-indexes"><strong>Non-Clustered Indexes:</strong></h3><p><strong>Timestamp Index:</strong></p><pre><code class="language-sql">CREATE INDEX idx_timestamp ON sensor_data(timestamp);

</code></pre><p>This statement creates a non-clustered index named <strong><code>idx_timestamp</code></strong> on the <strong><code>timestamp</code></strong> column of the <strong><code>sensor_data</code></strong> table. Non-clustered indexes store a separate data structure that contains a mapping between the indexed values (timestamps, in this case) and the corresponding rows in the table. They do not alter the physical order of the table itself.</p><p><strong>Device_ID Index:</strong></p><pre><code class="language-sql">CREATE INDEX idx_device_id ON sensor_data(device_id);

</code></pre><p>Similarly, this statement creates a non-clustered index named <strong><code>idx_device_id</code></strong> on the <strong><code>device_id</code></strong> column. This index is useful when queries involve filtering or sorting based on the <strong><code>device_id</code></strong>.</p><h3 id="clustered-indexes"><strong>Clustered Indexes:</strong></h3><p>In PostgreSQL, the term &quot;clustered index&quot; is used differently than in some other database systems. In PostgreSQL, there is a feature called the &quot;cluster&quot; command that physically reorders the rows in a table based on the order of an index. This is not the same as a traditional clustered index in some other databases.</p><p>For example, you might use the <strong><code>CLUSTER</code></strong> command like this:</p><pre><code class="language-sql">CLUSTER sensor_data USING idx_timestamp;

</code></pre><p>This command reorganizes the <strong><code>sensor_data</code></strong> table on disk so that the rows are physically stored in the order defined by the <strong><code>idx_timestamp</code></strong> index. This operation can improve the performance of range queries on the timestamp column.</p><h3 id="no-default-clustered-index"><strong>No Default Clustered Index:</strong></h3><p>It&apos;s important to note that PostgreSQL does not have a default clustered index for tables. When you create an index, it is non-clustered by default. The decision to cluster a table based on an index is explicit and needs to be done using the <strong><code>CLUSTER</code></strong> command.</p><h3 id="considerations"><strong>Considerations:</strong></h3><ul><li><strong>Cluster Command Use:</strong></li><li>Using the <strong><code>CLUSTER</code></strong> command can be resource-intensive and should be done carefully, especially in production environments. It involves temporarily taking the table offline during the clustering process.</li><li><strong>Regular Index Use:</strong></li><li>In many cases, non-clustered indexes (like those created with the <strong><code>CREATE INDEX</code></strong> statements) are sufficient for optimizing query performance. Consider using these indexes unless you have specific reasons to cluster the table.</li><li><strong>Index Maintenance:</strong></li><li>Regularly monitor and maintain your indexes. PostgreSQL&apos;s autovacuum process helps manage the health of indexes over time.</li></ul><p>Remember that the decision to use clustered indexes depends on the specific requirements and usage patterns of your application. 
It&apos;s recommended to analyze query performance, evaluate the impact of clustering on your specific workload, and make decisions accordingly.</p><h1 id="fragmentation-in-the-context-of-a-database">Fragmentation in the context of a database</h1><p>Fragmentation in the context of a database generally refers to the phenomenon where data is scattered or dispersed in a non-contiguous manner, leading to suboptimal performance. There are two main types of fragmentation: internal fragmentation and external fragmentation.</p><h3 id="1-internal-fragmentation"><strong>1. Internal Fragmentation:</strong></h3><ul><li><strong>Definition:</strong> Internal fragmentation occurs within data structures, such as tables or indexes, when space is allocated but not fully utilized. This can happen due to variable-length data types, padding, or inefficient storage allocation.</li><li><strong>Example:</strong> In the context of a database, internal fragmentation might occur when variable-length columns are used, and the actual data does not fully utilize the allocated space.</li><li><strong>How to Handle Internal Fragmentation:</strong></li><li>Regularly perform maintenance tasks like vacuuming or rebuilding indexes. This helps reclaim unused space within the data structures.</li></ul><h3 id="2-external-fragmentation"><strong>2. External Fragmentation:</strong></h3><ul><li><strong>Definition:</strong> External fragmentation occurs when the physical storage of data becomes non-contiguous, leading to inefficiencies in data access. 
This can happen when data is deleted or updated, and free space becomes fragmented across different disk locations.</li><li><strong>Example:</strong> In a table, if rows are frequently deleted or updated, the free space left by these operations might be scattered across various data pages, causing external fragmentation.</li><li><strong>How to Handle External Fragmentation:</strong></li><li><strong>Reorganize Data:</strong> Periodically reorganize or rebuild tables and indexes to consolidate fragmented free space.</li><li><strong>Use Indexes Wisely:</strong> Well-designed indexes can reduce the impact of external fragmentation by providing efficient access paths to data.</li></ul><h3 id="handling-fragmentation-in-postgresql"><strong>Handling Fragmentation in PostgreSQL:</strong></h3><p><strong>Vacuuming:</strong></p><ul><li>PostgreSQL uses a process called vacuuming to manage internal and external fragmentation. The <strong><code>VACUUM</code></strong> command reclaims storage occupied by dead rows and ensures that space is used efficiently.</li></ul><pre><code class="language-sql">VACUUM;

</code></pre><p><strong>Auto-vacuum:</strong></p><ul><li>PostgreSQL has an auto-vacuum process that runs automatically in the background. It helps manage internal and external fragmentation without manual intervention. Ensure that auto-vacuum is appropriately configured.</li></ul><p><strong>Rebuilding Indexes:</strong></p><ul><li>If you observe significant index fragmentation, consider rebuilding indexes. This can be done using the <strong><code>REINDEX</code></strong> command.</li></ul><pre><code class="language-sql">REINDEX INDEX idx_example;

</code></pre><p><strong>Cluster Command:</strong></p><ul><li>The <strong><code>CLUSTER</code></strong> command can be used to physically reorder the table based on an index. This can help reduce both internal and external fragmentation, but it requires exclusive access to the table.</li></ul><pre><code class="language-sql">CLUSTER my_table USING idx_example;

</code></pre><p><strong>Analyze:</strong></p><ul><li>Regularly analyze the performance of your database, identify tables or indexes with high fragmentation, and take appropriate corrective actions.</li></ul><p>Remember that managing fragmentation is an ongoing process, and the specific strategies you employ may depend on the workload, data patterns, and maintenance requirements of your PostgreSQL database.</p><h1 id="auto-vacuum-in-postgresql">Auto-vacuum in PostgreSQL</h1><p>Auto-vacuum in PostgreSQL is a background process that helps manage the storage and performance of the database by reclaiming space occupied by dead rows and updating statistics. In most PostgreSQL installations, auto-vacuum is enabled by default. However, you may need to adjust its configuration settings to better suit your database workload.</p><p>Here&apos;s how you can configure auto-vacuum in PostgreSQL:</p><h3 id="check-auto-vacuum-status"><strong>Check Auto-vacuum Status:</strong></h3><p>Before making changes, you can check the current status of auto-vacuum in your PostgreSQL database:</p><pre><code class="language-sql">SHOW autovacuum;
SHOW autovacuum_vacuum_scale_factor;
SHOW autovacuum_analyze_scale_factor;
SHOW autovacuum_vacuum_cost_limit;

</code></pre><h3 id="adjust-auto-vacuum-configuration"><strong>Adjust Auto-vacuum Configuration:</strong></h3><p><strong>Enable/Disable Auto-vacuum:</strong></p><p>By default, auto-vacuum is enabled. If, for some reason, it&apos;s disabled, you can enable it with the following command:</p><pre><code class="language-sql">ALTER SYSTEM SET autovacuum = on;

</code></pre><p>If you need to disable auto-vacuum (not recommended in most cases), you can use:</p><pre><code class="language-sql">ALTER SYSTEM SET autovacuum = off;

</code></pre><p><strong>Adjust Scale Factors:</strong></p><p>Auto-vacuum has scale factors that determine when it should run based on the number of updated or inserted rows. You can adjust these factors to fine-tune the auto-vacuum behavior.</p><pre><code class="language-sql">
-- Adjust the scale factor for vacuum
ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.1;

-- Adjust the scale factor for analyze
ALTER SYSTEM SET autovacuum_analyze_scale_factor = 0.05;

</code></pre><p>The scale factors control the fraction of the table that needs to be modified before auto-vacuum is triggered.</p><p><strong>Set Vacuum Cost Limit:</strong></p><p>The <strong><code>autovacuum_vacuum_cost_limit</code></strong> parameter controls how much work is performed by a single vacuuming process before it sleeps to avoid impacting other database activities.</p><pre><code class="language-sql">-- Set the vacuum cost limit
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;

</code></pre><p>Adjust this value based on your system&apos;s resources and workload.</p><p><strong>Reload Configuration:</strong></p><p>After making changes to the configuration, you need to reload the configuration to apply the new settings:</p><pre><code class="language-sql">SELECT pg_reload_conf();

</code></pre><h3 id="monitoring-auto-vacuum"><strong>Monitoring Auto-vacuum:</strong></h3><p>You can monitor auto-vacuum activity using the following system views:</p><p><strong>pg_stat_user_tables:</strong></p><ul><li>Displays information about tables.</li></ul><pre><code class="language-sql">SELECT * FROM pg_stat_user_tables;
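
-- A more targeted check: dead-tuple counts and last autovacuum runs per table
SELECT relname, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;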

</code></pre><p><strong>pg_stat_user_indexes:</strong></p><ul><li>Displays information about indexes.</li></ul><pre><code class="language-sql">SELECT * FROM pg_stat_user_indexes;

</code></pre><p><strong>pg_stat_bgwriter:</strong></p><ul><li>Displays information about the background writer process.</li></ul><pre><code class="language-sql">SELECT * FROM pg_stat_bgwriter;

</code></pre><p>These views provide insights into the auto-vacuum activity, including the number of rows and pages vacuumed or analyzed.</p><p>Remember that auto-vacuum is generally well-tuned by default, and you may not need to make significant changes unless you have specific performance requirements or challenges in your PostgreSQL environment.</p>]]></content:encoded></item><item><title><![CDATA[Event-Driven Architecture with C#: Harnessing the Power of Event-Based Systems]]></title><description><![CDATA[<h2 id="purpose"><strong>Purpose</strong></h2><p>In the world of software development, event-driven architecture (EDA) has gained significant popularity as a powerful approach to building scalable and decoupled systems. By leveraging events as the primary means of communication between components, EDA enables flexible and loosely coupled architectures that can handle complex workflows and seamlessly integrate</p>]]></description><link>https://nhossaincse.com/event-driven-architecture-with-csharp-harnessing-the-power-of-event-based-systems/</link><guid isPermaLink="false">649bd19dc8533a0001cc5177</guid><category><![CDATA[C#]]></category><category><![CDATA[System-Design]]></category><category><![CDATA[Microservice]]></category><category><![CDATA[Event-Driven-Design]]></category><category><![CDATA[.NET]]></category><dc:creator><![CDATA[Md Nazmul Hossain]]></dc:creator><pubDate>Sun, 04 Jun 2023 06:42:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1632303283130-8a2ec7823251?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDZ8fHNvZnR3YXJlJTIwZGVzaWduJTIwcGF0dGVybnxlbnwwfHx8fDE3MDEyNDc1Mzd8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h2 id="purpose"><strong>Purpose</strong></h2><img 
src="https://images.unsplash.com/photo-1632303283130-8a2ec7823251?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDZ8fHNvZnR3YXJlJTIwZGVzaWduJTIwcGF0dGVybnxlbnwwfHx8fDE3MDEyNDc1Mzd8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Event-Driven Architecture with C#: Harnessing the Power of Event-Based Systems"><p>In the world of software development, event-driven architecture (EDA) has gained significant popularity as a powerful approach to building scalable and decoupled systems. By leveraging events as the primary means of communication between components, EDA enables flexible and loosely coupled architectures that can handle complex workflows and seamlessly integrate with various services. In this blog post, we&apos;ll explore the fundamentals of event-driven architecture and demonstrate how you can implement it using C# snippets.</p><h2 id="items-to-cover"><strong>Items to Cover </strong></h2><ol><li>What is Event-Driven Architecture?</li><li>Key Components of Event-Driven Architecture</li><li>Benefits of Event-Driven Architecture</li><li>Implementing the Event-Driven Architecture in C# a. Setting Up the Event Infrastructure b. Defining Events and Event Handlers c. Publishing and Subscribing to Events d. Error Handling and Retry Mechanisms</li><li>Real-World Use Cases for Event-Driven Architecture</li><li>Conclusion</li></ol><h2 id="1-what-is-event-driven-architecture"><strong>1. What is Event-Driven Architecture</strong></h2><p>Event-Driven Architecture is a design pattern where components communicate and react to events occurring within the system. An event represents a significant occurrence or change in the system&apos;s state. Examples of events can include user actions, system events, or external triggers. The primary characteristic of EDA is the decoupling of components, allowing them to function independently and react to events asynchronously.</p><h2 id="2-key-components-of-event-driven-architecture"><strong>2. 
Key Components of Event-Driven Architecture</strong></h2><ul><li>Event: A significant occurrence or change in the system that triggers a reaction.</li><li>Event Producer: The component responsible for generating events and notifying the system.</li><li>Event Consumer: The component that reacts to events by executing specific actions or processes.</li><li>Event Broker/Bus: A middleware component that handles event publishing, event routing, and event subscriptions.</li><li>Event Handler: A piece of code that defines the logic to be executed when a specific event occurs.</li></ul><h2 id="3-benefits-of-event-driven-architecture"><strong>3. Benefits of Event-Driven Architecture</strong></h2><ul><li>Loose Coupling: Components are decoupled, allowing them to evolve independently.</li><li>Scalability: Event-driven systems can scale horizontally by adding more event consumers.</li><li>Flexibility: New functionalities can be added by simply subscribing to relevant events.</li><li>Asynchronicity: Components can process events independently without blocking each other.</li><li>Resilience: Failure of one component doesn&apos;t impact the entire system.</li><li>Event Traceability: Events provide a comprehensive audit trail of system actions.</li></ul><h2 id="4-implementing-event-driven-architecture-in-c"><strong>4. Implementing Event-Driven Architecture in C#</strong></h2><p>a. Setting Up the Event Infrastructure: To implement event-driven architecture in C#, you&apos;ll need to set up an event infrastructure. This can be achieved using a message broker like RabbitMQ or a pub-sub system like Azure Service Bus.</p><p>b. Defining Events and Event Handlers: Define events as classes or structs in C#. An event handler is a method that gets executed when the corresponding event occurs. For example:</p><pre><code class="language-csharp">public class OrderPlacedEvent
{
    public Guid OrderId { get; set; }
    public DateTime OrderDate { get; set; }
    // Additional properties
}

public class OrderPlacedEventHandler
{
    public void Handle(OrderPlacedEvent @event)
    {
        // Event handling logic
    }
}
</code></pre><p>c. Publishing and Subscribing to Events: To publish an event, you need to instantiate the event class and send it to the event broker or bus. Subscribing to an event involves registering the event handler with the broker or bus. Here&apos;s an example:</p><pre><code class="language-csharp">// Publishing an event
var orderPlacedEvent = new OrderPlacedEvent { OrderId = orderId, OrderDate = DateTime.Now };
eventBus.Publish(orderPlacedEvent);

// Subscribing to an event
eventBus.Subscribe&lt;OrderPlacedEvent&gt;(new OrderPlacedEventHandler().Handle);
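
// NOTE: the eventBus object above is assumed infrastructure (e.g. backed by
// RabbitMQ or Azure Service Bus) and is not defined in this post. A minimal
// in-memory sketch, for illustration only, could look like this
// (requires System and System.Collections.Generic):
public class InMemoryEventBus
{
    private readonly Dictionary&lt;Type, List&lt;Delegate&gt;&gt; _handlers =
        new Dictionary&lt;Type, List&lt;Delegate&gt;&gt;();

    public void Subscribe&lt;TEvent&gt;(Action&lt;TEvent&gt; handler)
    {
        if (!_handlers.TryGetValue(typeof(TEvent), out var handlers))
            _handlers[typeof(TEvent)] = handlers = new List&lt;Delegate&gt;();
        handlers.Add(handler);
    }

    public void Publish&lt;TEvent&gt;(TEvent @event)
    {
        // Fan the event out to every handler registered for its type.
        if (_handlers.TryGetValue(typeof(TEvent), out var handlers))
            foreach (Action&lt;TEvent&gt; handler in handlers)
                handler(@event);
    }
}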
</code></pre><p>d. Error Handling and Retry Mechanisms: Implement error handling and retry mechanisms to ensure the resilience and reliability of the event-driven system. Techniques like dead-letter queues, exponential backoff, and circuit breakers can be used to handle failures gracefully.</p><h2 id="5-real-world-use-cases-for-event-driven-architecture"><strong>5. Real-World Use Cases for Event-Driven Architecture</strong></h2><ul><li>E-commerce platforms: Handling order placement, inventory management, and shipping updates.</li><li>Microservices architectures: Enabling communication between services while maintaining loose coupling.</li><li>Internet of Things (IoT) systems: Reacting to sensor data and triggering appropriate actions.</li><li>Real-time analytics: Processing data streams and generating insights in real-time.</li></ul><h2 id="3-conclusion"><strong>6. Conclusion</strong></h2><p>Event-Driven Architecture offers a flexible and scalable approach to building complex systems. By leveraging events and decoupling components, you can achieve resilience, scalability, and maintainability. With C# and the provided code snippets, you can start implementing event-driven architectures and harness the power of event-based systems in your applications.</p><p>Remember, event-driven architecture is a paradigm that requires careful design and consideration of your system&apos;s requirements. When used appropriately, it can lead to highly scalable, responsive, and modular applications.</p><p>Happy event-driven coding in C#!</p>]]></content:encoded></item><item><title><![CDATA[What is Coupling in OOP and why it is helpful?]]></title><description><![CDATA[<h2 id="purpose"><strong>Purpose</strong></h2><p>To describe the Coupling in OOP and its benefits.</p><h2 id="explanation"><strong>Explanation</strong></h2><p>Coupling in object-oriented programming (OOP) refers to the degree of interdependence between classes or modules. 
It indicates how closely two or more classes are connected or rely on each other.</p><p>Coupling can be categorized into different levels:</p><ol><li>Loose coupling:</li></ol>]]></description><link>https://nhossaincse.com/coupling-in-oop/</link><guid isPermaLink="false">64608144c8533a0001cc5124</guid><category><![CDATA[C#]]></category><category><![CDATA[OOP]]></category><category><![CDATA[.NET]]></category><dc:creator><![CDATA[Md Nazmul Hossain]]></dc:creator><pubDate>Sun, 30 Apr 2023 06:42:00 GMT</pubDate><media:content url="https://nhossaincse.com/content/images/2023/05/c-sharp-lang-icon.png" medium="image"/><content:encoded><![CDATA[<h2 id="purpose"><strong>Purpose</strong></h2><img src="https://nhossaincse.com/content/images/2023/05/c-sharp-lang-icon.png" alt="What is Coupling in OOP and why it is helpful?"><p>To describe the Coupling in OOP and its benefits.</p><h2 id="explanation"><strong>Explanation</strong></h2><p>Coupling in object-oriented programming (OOP) refers to the degree of interdependence between classes or modules. It indicates how closely two or more classes are connected or rely on each other.</p><p>Coupling can be categorized into different levels:</p><ol><li>Loose coupling: Classes are minimally dependent on each other, and changes in one class have minimal impact on other classes. Communication between classes is typically achieved through well-defined interfaces or abstractions.</li><li>Moderate coupling: Classes have some level of dependency on each other, but changes in one class may require corresponding modifications in other classes. Communication between classes may involve passing data or invoking methods directly.</li><li>Tight coupling: Classes are highly dependent on each other, and changes in one class can significantly impact or break the functionality of other classes. 
Communication between classes involves direct references or dependencies on concrete implementations.</li></ol><p>While loose coupling is generally considered beneficial, it is important to strike a balance based on the specific requirements and design of the system. Here are some reasons why loose coupling is helpful:</p><ol><li>Modularity: Loose coupling promotes modularity by allowing classes or modules to be developed and maintained independently. Changes or updates to one module can be made without affecting others, making the system more flexible and easier to manage.</li><li>Code reusability: Loose coupling enables greater code reusability. Well-defined interfaces and abstractions make it easier to replace or extend the implementation of a class without impacting other parts of the system.</li><li>Testability: Loosely coupled classes are easier to test in isolation since they can be mocked or stubbed more effectively. This facilitates unit testing, as individual classes can be tested independently without requiring the presence of all the dependencies.</li><li>Scalability and Parallel Development: Loose coupling supports scalability and parallel development. Different teams or developers can work on different modules simultaneously without stepping on each other&apos;s toes, as long as they adhere to the agreed-upon interfaces and contracts.</li><li>Maintainability: Loose coupling enhances maintainability by reducing the ripple effect of changes. Modifications or bug fixes in one class have minimal impact on other classes, making it easier to understand, modify, and enhance the system over time.</li></ol><h2 id="example-with-c"><strong>Example with C#</strong></h2><p>Here&apos;s an example in C# to illustrate different levels of coupling:</p><pre><code class="language-csharp">// High coupling (Tight coupling)
public class Car
{
    private Engine engine;

    public Car()
    {
        engine = new Engine();
    }

    public void Start()
    {
        engine.Start();
    }
}

public class Engine
{
    public void Start()
    {
        Console.WriteLine(&quot;Engine started.&quot;);
    }
}
</code></pre><p>In the above example, the <code>Car</code> class has a tight coupling with the <code>Engine</code> class. It directly creates an instance of the <code>Engine</code> class within its constructor and invokes the <code>Start</code> method on it. Any changes to the <code>Engine</code> class, such as renaming the method or modifying its behavior, may require corresponding modifications in the <code>Car</code> class.</p><pre><code class="language-csharp">// Moderate coupling
public interface IEngine
{
    void Start();
}

public class Car
{
    private IEngine engine;

    public Car(IEngine engine)
    {
        this.engine = engine;
    }

    public void Start()
    {
        engine.Start();
    }
}

public class Engine : IEngine
{
    public void Start()
    {
        Console.WriteLine(&quot;Engine started.&quot;);
    }
}
</code></pre><p>In this example, the <code>Car</code> class is moderately coupled with the <code>Engine</code> class through the <code>IEngine</code> interface. The <code>Car</code> class depends on the <code>IEngine</code> interface, allowing different implementations of the engine to be injected at runtime. This flexibility allows for easier testing and decouples the <code>Car</code> class from specific implementations of the engine.</p><pre><code class="language-csharp">// Low coupling (Loose coupling)
public interface IEngine
{
    void Start();
}

public class Car
{
    private IEngine engine;

    public Car(IEngine engine)
    {
        this.engine = engine;
    }

    public void Start()
    {
        engine.Start();
    }
}

public class ElectricEngine : IEngine
{
    public void Start()
    {
        Console.WriteLine(&quot;Electric engine started.&quot;);
    }
}

public class CombustionEngine : IEngine
{
    public void Start()
    {
        Console.WriteLine(&quot;Combustion engine started.&quot;);
    }
}
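
// Usage sketch (hypothetical driver class, not part of the original example):
// either engine can be passed to Car without modifying the Car class.
public class Program
{
    public static void Main()
    {
        var electricCar = new Car(new ElectricEngine());
        electricCar.Start();    // prints &quot;Electric engine started.&quot;

        var combustionCar = new Car(new CombustionEngine());
        combustionCar.Start();  // prints &quot;Combustion engine started.&quot;
    }
}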
</code></pre><p>In this example, the <code>Car</code> class has low coupling by depending on the <code>IEngine</code> interface, which can be implemented by different engine types. The <code>ElectricEngine</code> and <code>CombustionEngine</code> classes implement the <code>IEngine</code> interface, providing specific engine behaviors. The <code>Car</code> class can work with any engine type as long as it adheres to the <code>IEngine</code> interface, resulting in loose coupling.</p><p>By using interfaces and dependency injection, we can achieve lower levels of coupling, making the system more modular, extensible, and maintainable.</p><h2 id="conclusion"><strong>Conclusion</strong></h2><p>Overall, loose coupling in OOP promotes flexibility, modularity, reusability, testability, and maintainability, contributing to the development of robust and adaptable software systems.</p>]]></content:encoded></item><item><title><![CDATA[Dependency Injection vs Inversion of Control]]></title><description><![CDATA[<h2 id="purpose"><strong>Purpose</strong></h2><p>The purpose of this blog post is to provide an overview of Dependency Injection (DI) and Inversion of Control (IoC) in software development, explain their differences and similarities, and help readers understand how they can use them to improve the design and maintainability of their applications.</p><h2 id="background"><strong>Background</strong></h2><p>In software</p>]]></description><link>https://nhossaincse.com/dependency-injection-vs-inversion-of-control/</link><guid isPermaLink="false">62dd0732c8533a0001cc498f</guid><category><![CDATA[C#]]></category><category><![CDATA[Dependency-Injection]]></category><category><![CDATA[Inversion-Of-Control]]></category><category><![CDATA[.NET]]></category><dc:creator><![CDATA[Md Nazmul Hossain]]></dc:creator><pubDate>Thu, 15 Dec 2022 03:24:00 GMT</pubDate><media:content url="https://nhossaincse.com/content/images/2023/05/DI-IoC-2.png" medium="image"/><content:encoded><![CDATA[<h2 
id="purpose"><strong>Purpose</strong></h2><img src="https://nhossaincse.com/content/images/2023/05/DI-IoC-2.png" alt="Dependency Injection vs Inversion of Control"><p>The purpose of this blog post is to provide an overview of Dependency Injection (DI) and Inversion of Control (IoC) in software development, explain their differences and similarities, and help readers understand how they can use them to improve the design and maintainability of their applications.</p><h2 id="background"><strong>Background</strong></h2><p>In software development, managing dependencies is an important aspect of creating maintainable and testable code. Both DI and IoC are patterns that can be used to manage dependencies and improve the flexibility and maintainability of applications.</p><h2 id="problem-statement"><strong>Problem statement </strong></h2><p>One common problem in software development is managing complex dependencies between objects and modules. If dependencies are not managed properly, it can be difficult to test and maintain the code, and changes to one module can have unintended consequences on other modules. This can lead to code that is hard to maintain, understand, and extend.</p><h2 id="possible-solutions"><strong>Possible solutions</strong></h2><p><strong>Dependency Injection</strong></p><p>DI is a technique for implementing IoC by injecting dependencies into objects, rather than creating or managing them within the object itself. This makes the dependencies explicit and testable, and it allows them to be easily replaced or updated without modifying the object itself.</p><p>Let&apos;s start with an example of Dependency Injection in C#. Suppose we have a simple application that needs to print a message to the console. We can create an interface <code>IPrinter</code> and an implementation of that interface <code>ConsolePrinter</code> as follows:</p><pre><code class="language-csharp">public interface IPrinter
{
    void Print(string message);
}

public class ConsolePrinter : IPrinter
{
    public void Print(string message)
    {
        Console.WriteLine(message);
    }
}
</code></pre><p>We can then create a class <code>MessageService</code> that depends on an <code>IPrinter</code>, which is injected as a constructor argument:</p><pre><code class="language-csharp">public class MessageService
{
    private readonly IPrinter _printer;

    public MessageService(IPrinter printer)
    {
        _printer = printer;
    }

    public void PrintMessage(string message)
    {
        _printer.Print(message);
    }
}
</code></pre><p>We can now create an instance of <code>MessageService</code> and inject an instance of <code>ConsolePrinter</code>:</p><pre><code class="language-csharp">var messageService = new MessageService(new ConsolePrinter());
messageService.PrintMessage(&quot;Hello, world!&quot;);
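
// Because the printer is injected, a test double can replace the real
// console printer without changing MessageService. (FakePrinter is a
// hypothetical class for illustration; it is not part of the original post.)
public class FakePrinter : IPrinter
{
    public List&lt;string&gt; Messages { get; } = new List&lt;string&gt;();

    public void Print(string message) =&gt; Messages.Add(message);
}

// var fake = new FakePrinter();
// new MessageService(fake).PrintMessage(&quot;test&quot;);
// fake.Messages now contains &quot;test&quot; -- no console output involved.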
</code></pre><p>In this example, we&apos;re injecting an instance of <code>ConsolePrinter</code> into <code>MessageService</code>. If we wanted to use a different implementation of <code>IPrinter</code>, we could simply create a new implementation and inject it instead.</p><p><strong>Inversion of Control</strong> </p><p>IoC is a design principle that states that the control flow of a program should be inverted, so that the framework or container manages the dependencies and control flow of the application, rather than the application managing its own dependencies. This means that the application doesn&apos;t create or manage its own dependencies, but instead relies on an external framework or container to provide them. This makes the application more flexible and easier to modify, because dependencies can be easily swapped out or updated.</p><p>An example of Inversion of Control using a container. We can use a third-party IoC container such as Autofac to manage our dependencies.</p><p>First, we need to register our dependencies with the container:</p><pre><code class="language-csharp">var builder = new ContainerBuilder();
builder.RegisterType&lt;ConsolePrinter&gt;().As&lt;IPrinter&gt;();
builder.RegisterType&lt;MessageService&gt;();
var container = builder.Build();
</code></pre><p>We&apos;re registering <code>ConsolePrinter</code> as an implementation of <code>IPrinter</code>, and <code>MessageService</code> as a type that depends on <code>IPrinter</code>.</p><p>We can now resolve an instance of <code>MessageService</code> from the container:</p><pre><code class="language-csharp">var messageService = container.Resolve&lt;MessageService&gt;();
messageService.PrintMessage(&quot;Hello, world!&quot;);
</code></pre><p>When we call <code>container.Resolve&lt;MessageService&gt;()</code>, Autofac automatically resolves the dependencies of <code>MessageService</code>, which in this case is an instance of <code>ConsolePrinter</code>.</p><p>In this example, we&apos;re using an IoC container to manage our dependencies and control the flow of our application. If we wanted to use a different implementation of <code>IPrinter</code>, we could simply register a different implementation with the container. This makes our application more flexible and easier to modify over time.</p><h2 id="conclusion"><strong>Conclusion</strong></h2><p>Both DI and IoC are powerful patterns that can help improve the flexibility, testability, and maintainability of applications. Which pattern to use depends on the specific requirements and constraints of the application. However, in general, using DI and IoC can lead to better-organized code, improved testability, and easier maintenance and extension of the application over time. It is recommended that developers become familiar with these patterns and incorporate them into their design process.</p>]]></content:encoded></item><item><title><![CDATA[Scrum Overview]]></title><description><![CDATA[<p>Recently I have become a Scrum Master. The workaround of Scrum is very interesting to me. This is an approach to value-oriented project management. </p><blockquote>The comprehensive consolidation of the Scrum aspects is listed in the fantastic presentation platform named Prezi by me. 
The details can be viewed from the link</blockquote>]]></description><link>https://nhossaincse.com/scrum-overview/</link><guid isPermaLink="false">630615a2c8533a0001cc5032</guid><category><![CDATA[Scrum]]></category><category><![CDATA[agile]]></category><category><![CDATA[project-management]]></category><dc:creator><![CDATA[Md Nazmul Hossain]]></dc:creator><pubDate>Wed, 24 Aug 2022 12:19:23 GMT</pubDate><media:content url="https://nhossaincse.com/content/images/2022/08/Scrum-Approach.png" medium="image"/><content:encoded><![CDATA[<img src="https://nhossaincse.com/content/images/2022/08/Scrum-Approach.png" alt="Scrum Overview"><p>Recently I have become a Scrum Master. The way Scrum works is very interesting to me. It is an approach to value-oriented project management. </p><blockquote>I have consolidated the key aspects of Scrum into a presentation on Prezi. The details can be viewed from the link below:</blockquote><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://prezi.com/view/gWQEFUcIYaE3Y0BZmMSg/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">SCRUM Overview</div><div class="kg-bookmark-description">FRAMEWORK FOR PROJECT MANAGEMENT SCRUM Prepared by: Nazmul Hossain Waterfall vs Scrum Waterfall VS Scrum Development &#x2013; with Scrum we have iterative development and with Waterfall it&#x2019;s sequential Progress &#x2013; with Scrum we see the progress by the delivery of valued features every How</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://assets.prezicdn.net/assets-versioned/prezipage-versioned/4186-1338e09/common/img/prezi-apple-touch-icon.png" alt="Scrum Overview"><span class="kg-bookmark-author">prezi.com</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://0701.static.prezi.com/preview/v2/dvtxosdbhb3hz5rs3qscui2bdl6jc3sachvcdoaizecfr3dnitcq_3_0.png" alt="Scrum 
</div></a></figure>">
Overview"></div></a></figure>]]></content:encoded></item><item><title><![CDATA[Microservice Project Architecture - Generic IoT Skeleton]]></title><description><![CDATA[Designing an IoT solution by considering all the scenarios is very important for the long run of the product. I will describe a microservice-based IoT solution architecture in this article.]]></description><link>https://nhossaincse.com/microservice-project-architecture-generic-iot-skeleton/</link><guid isPermaLink="false">62f8ba4bc8533a0001cc4ed7</guid><category><![CDATA[System-Design]]></category><category><![CDATA[IoT]]></category><category><![CDATA[Microservice]]></category><category><![CDATA[Smart City]]></category><dc:creator><![CDATA[Md Nazmul Hossain]]></dc:creator><pubDate>Sun, 14 Aug 2022 18:42:44 GMT</pubDate><media:content url="https://nhossaincse.com/content/images/2022/08/IoT_Agriculture.jpeg" medium="image"/><content:encoded><![CDATA[<h2 id="purpose"><strong>Purpose </strong></h2><img src="https://nhossaincse.com/content/images/2022/08/IoT_Agriculture.jpeg" alt="Microservice Project Architecture - Generic IoT Skeleton"><p>Designing an IoT solution by considering all the scenarios is very important for the long run of the product. I will describe a microservice-based IoT solution architecture in this article. </p><h2 id="background"><strong>Background</strong></h2><p>Before you finish reading this article on architecting an IoT solution, hundreds of new devices will have been connected to the web at a breathtaking pace of 127 additions per minute. For the most part, they belong to the Internet of Things (IoT), or gadgets capable of communicating and sharing data without human interaction.</p><p>The technology will shift into an even higher gear with the wide use of fifth-generation or 5G networks supporting a million gadgets per square kilometer &#x2014; ten times as many as in the 4G era. 
The number of active IoT connections is expected to double by 2025.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://nhossaincse.com/content/images/2022/08/image-6.png" class="kg-image" alt="Microservice Project Architecture - Generic IoT Skeleton" loading="lazy" width="1467" height="839" srcset="https://nhossaincse.com/content/images/size/w600/2022/08/image-6.png 600w, https://nhossaincse.com/content/images/size/w1000/2022/08/image-6.png 1000w, https://nhossaincse.com/content/images/2022/08/image-6.png 1467w" sizes="(min-width: 1200px) 1200px"><figcaption>Figure 1: The growth in connected devices over the 2015-2025 decade. Source:<a href="https://iot-analytics.com/state-of-the-iot-update-q1-q2-2018-number-of-iot-devices-now-7b/"> IoT Analytics</a></figcaption></figure><p>Day by day, the IoT sees wider adoption, opening new opportunities and driving more value to both businesses and their clients. For companies, incorporating the consistent IoT strategy into daily routine means continuous access to valuable data about products and processes that can be translated into reduced expenses, improved efficiency in logistics and maintenance, better products, and enhanced customer experience.</p><p>But to build and run a robust infrastructure, a manufacturer or service provider needs a solid foundation &#x2014; in other words, an IoT platform that connects devices, collects data, and creates insights. This article will explain the IoT ecosystem, outline the core IoT platform functionality, and compare major players to help you make the right choice.</p><h2 id="problem-statement-issue-or-challenge"><strong>Problem statement (Issue or Challenge)</strong></h2><p>There are multiple layers to consider before designing the solution. 
With an IoT platform acting as a bridge between the physical world and business processes, the IoT infrastructure contains several key layers:</p><ul><li>Perception layer (hardware components such as sensors, actuators, and devices)</li><li>Transport layer (networks and gateway)</li><li>Processing layer (middleware or IoT platforms) and</li><li>Application layer (software solutions for end users)</li></ul><figure class="kg-card kg-image-card kg-width-full kg-card-hascaption"><img src="https://nhossaincse.com/content/images/2022/08/image-7.png" class="kg-image" alt="Microservice Project Architecture - Generic IoT Skeleton" loading="lazy" width="1911" height="1021" srcset="https://nhossaincse.com/content/images/size/w600/2022/08/image-7.png 600w, https://nhossaincse.com/content/images/size/w1000/2022/08/image-7.png 1000w, https://nhossaincse.com/content/images/size/w1600/2022/08/image-7.png 1600w, https://nhossaincse.com/content/images/2022/08/image-7.png 1911w"><figcaption>Figure 2: How an IoT system works</figcaption></figure><h3 id="perception-layer-iot-hardware">Perception layer: IoT hardware</h3><p>The hardware or &#x201C;things&#x201D; layer includes the following gears that work with signals from the physical world.</p><p><strong>Electronic sensors </strong>capture signals from the physical world, convert them into digital form, and feed them to the IoT system. You can monitor and manage sensors remotely, using a special application.</p><p><strong>Actuators</strong> receive signals from the IoT system and translate them into physical actions manipulating equipment. Similar to sensors, actuators can be configured from remote computers.</p><p><strong>Devices</strong> are connected to sensors or even have them embedded as an integral part. On the other side, devices link to a gateway or directly to an IoT platform. 
These hardware components cache and preprocess real-time data, reducing the burden on central storages and main processors.</p><h3 id="transport-layer-networks-and-gateways">Transport layer: networks and gateways</h3><p>The transport layer is responsible for smooth and secure data transmission from the perception layer to the processing layer. It encompasses wired or wireless networks and a gateway &#x2014; a hardware or software module that consolidates data from devices, analyzes it, performs translation between different protocols, and forwards information to the cloud. As a rule, the gateway converts all information into MQTT messages &#x2014; the lightweight protocol most widely used in the IoT.</p><h3 id="processing-layer-cloud-middleware-or-iot-platforms">Processing layer: cloud middleware or IoT platforms</h3><p>Here we come to an IoT platform or middleware that actually drives IoT, enabling you to get all components and data streams connected. On the one side, it links to gateways or devices, and on the other side integrates with third-party applications and systems via APIs.</p><p>Typically, fully-fledged platforms take care of such important tasks:</p><ul><li><strong>connectivity</strong> or ensuring smooth data streaming and interactions between all IoT components;</li><li><strong>device management,</strong> which enables you to control and configure each piece of hardware in the IoT network as well as update software running on devices and gateways;</li><li><strong>data management, </strong>including data collection, processing, and storage;</li><li><strong>data analysis </strong>for extracting valuable patterns with machine learning, predictive analytics, and other methods;</li><li><strong>visualization </strong>or displaying data findings in the form of charts, graphs, 2D or 3D models;</li><li><strong>digital twin</strong> or creating the virtual representation of a device;</li><li><strong>IoT app development</strong> &#x2014; platforms provide a 
workspace with a set of tools and templates to speed up app designing; and</li><li><strong>edge/fog computing </strong>&#x2014; the practice of processing and storing data on devices, microcontrollers, gateways, and other IoT nodes to reduce the burden on cloud servers.</li></ul><h3 id="application-layer-software-solutions-for-users">Application layer: software solutions for users</h3><p>IoT software solutions allow end users to gain data insights, monitor and control devices, and, generally, manipulate the physical world through the IoT platform from computers and/or smartphones. Applications can be built on top of the IoT platform or integrated with it through APIs.</p><p>Now, with the place of IoT platforms in the connected ecosystem more clearly defined, let&#x2019;s concentrate on their functions, options available on the market, and how to choose between them.</p><h2 id="possible-solutions"><strong>Possible solutions</strong></h2><p>The proposed architecture of a generic IoT system can be visualized as below:</p><figure class="kg-card kg-image-card kg-width-full kg-card-hascaption"><img src="https://nhossaincse.com/content/images/2022/08/IoT_Generic_Architecture-1.jpg" class="kg-image" alt="Microservice Project Architecture - Generic IoT Skeleton" loading="lazy" width="1832" height="957" srcset="https://nhossaincse.com/content/images/size/w600/2022/08/IoT_Generic_Architecture-1.jpg 600w, https://nhossaincse.com/content/images/size/w1000/2022/08/IoT_Generic_Architecture-1.jpg 1000w, https://nhossaincse.com/content/images/size/w1600/2022/08/IoT_Generic_Architecture-1.jpg 1600w, https://nhossaincse.com/content/images/2022/08/IoT_Generic_Architecture-1.jpg 1832w"><figcaption>Figure 3: IoT Generic Platform - Microservice Architecture</figcaption></figure><p>There are multiple components in the architecture. A quick view of the components as the list given below:</p><!--kg-card-begin: markdown--><ol>
<li><strong>Sensors / Edges</strong>
<ul>
<li>Sensor / Edge Driven
<ul>
<li>Broker (EMQX / KAFKA)</li>
<li>HTTP</li>
</ul>
</li>
<li>Host Driven
<ul>
<li>Trigger (i.e: serverless azure function) / Scheduler (hangfire, Quartz)</li>
</ul>
</li>
</ul>
</li>
<li><strong>Data Source Gateway</strong></li>
<li><strong>Extract Transform Load (ETL) engine</strong></li>
<li><strong>Database</strong>
<ul>
<li>Relational: (PostgreSQL with Timescale DB / SQL Server)</li>
<li>No-SQL: Redis</li>
</ul>
</li>
<li><strong>App Tier</strong>
<ul>
<li>API</li>
<li>Redis</li>
<li>Lazy Cache</li>
</ul>
</li>
<li><strong>Web Tier</strong>
<ul>
<li>Web Apps / Mobile Apps</li>
<li>Grafana</li>
<li>Angular / React</li>
</ul>
</li>
<li><strong>Separate Internal Messaging Broker</strong>
<ul>
<li>EMQX / MQTT / Kafka</li>
</ul>
</li>
<li><strong>Notification Hub</strong>
<ul>
<li>Redis as its own DB</li>
<li>Alert Rules</li>
<li>Notify
<ul>
<li>Email</li>
<li>SMS</li>
<li>Push Notification</li>
</ul>
</li>
</ul>
</li>
<li>End User</li>
</ol>
<p><strong>Future Considerations:</strong></p>
<ol>
<li>IP whitelist from the sources to tighten the data flow.</li>
<li>Compress and encrypt the payload during communication.</li>
</ol>
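<p>To make the <strong>Notification Hub</strong> component slightly more concrete, a threshold-based alert rule could be evaluated as sketched below. This is a minimal, hedged illustration; the <code>AlertRule</code> model and the channel names are assumptions made for this sketch, not a prescribed implementation.</p>
<pre><code class="language-csharp">using System.Collections.Generic;

public class AlertRule
{
    public string Metric { get; set; }     // e.g. &quot;temperature&quot;
    public double Threshold { get; set; }  // alert when the value exceeds this
    public string Channel { get; set; }    // e.g. &quot;Email&quot;, &quot;SMS&quot; or &quot;Push&quot;
}

public static class AlertEvaluator
{
    // Returns the channels that should be notified for a single reading.
    public static IEnumerable&lt;string&gt; Evaluate(
        string metric, double value, IEnumerable&lt;AlertRule&gt; rules)
    {
        foreach (var rule in rules)
            if (rule.Metric == metric &amp;&amp; value &gt; rule.Threshold)
                yield return rule.Channel;
    }
}</code></pre>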
<!--kg-card-end: markdown--><p></p><h2 id="conclusion"><strong>Conclusion </strong></h2><p>Above architecture is a proposed generic solution architecture to start with, but based on domain and requirements the actual system, design components may vary.</p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Kubernetes - Debugging Internal Connectivity of Pods]]></title><description><![CDATA[Going to show the common facing problem to check the connectivity between pods from different service points. If the service can not connect to the desired service via the service endpoint. Then what are the debugging steps we can follow to rectify that?]]></description><link>https://nhossaincse.com/kubernetes-debugging-internal-connectivity-of-pods/</link><guid isPermaLink="false">62f68eafc8533a0001cc4b15</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[DevOps]]></category><dc:creator><![CDATA[Md Nazmul Hossain]]></dc:creator><pubDate>Sat, 13 Aug 2022 15:28:54 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1504639725590-34d0984388bd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDMyfHxjb2RlfGVufDB8fHx8MTY2MDMyNjQyOQ&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h2 id="purpose"><strong>Purpose </strong></h2><img src="https://images.unsplash.com/photo-1504639725590-34d0984388bd?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDMyfHxjb2RlfGVufDB8fHx8MTY2MDMyNjQyOQ&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Kubernetes - Debugging Internal Connectivity of Pods"><p>Going to show the common facing problem to check the connectivity between pods from different service points. If the service can not connect to the desired service via the service endpoint. 
Then what debugging steps can we follow to rectify that?</p><h2 id="background"><strong>Background</strong></h2><p>We have two applications, each running with two replicas (i.e. 2 pods each). The application names are <code>app1</code> and <code>app2</code>.</p><p>The manifests for both apps are given below:</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-deployment
  labels:
    app: app1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app1
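  # The matchLabels above must match the labels in the pod template below;
  # since apps/v1, the API server rejects a Deployment where they differ.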
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - name: app1
        image: nhossaincse/apiapp1:latest
        args:
          - --text=hello-world
        imagePullPolicy: Always
        ports:
        - containerPort: 80</code></pre><figcaption>app1-deployment.yaml</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2-deployment
  labels:
    app: app2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
      - name: app2
        image: nhossaincse/apiapp1:latest
        args:
          - --text=hello-world
        imagePullPolicy: Always
        ports:
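        # Caution: declaring containerPort 8081 here does not make the app
        # listen on 8081; the image decides which port the process binds to.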
        - containerPort: 8081</code></pre><figcaption>app2-deployment.yaml</figcaption></figure><p>We see that <code>app1</code> is running on port <code>80</code> and <code>app2</code> on port <code>8081</code>. We used a test image, <code>nhossaincse/apiapp1</code>, as the base of both apps. <code>apiapp1</code> is an in-memory web API that serves JSON for a GET request (a &quot;weatherforecast&quot;-style demo Docker application). It fits our case well, since we are going to check the HTTP API call from <code>app1</code> to <code>app2</code>.</p><p>Now we need connectivity between these two apps.</p><blockquote>To simplify the objective: we need to call an API of <code>app2</code> from <code>app1</code>.&#xA0;</blockquote><p>To do this, first, how will <code>app1</code> reach <code>app2</code>? This is described in detail in my earlier post: </p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://nhossaincse.com/exposing-applications-for-internal-access/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Kubernetes - Exposing Applications for Internal Access</div><div class="kg-bookmark-description">Purpose Often we need to communicate between pods internally deployed in Kubernetes. As I said, internally means wanting to communicate without getting any public IP (which costs an extra $) rather in a local area network(LAN) manner. Background When pods are created, they are assigned an IP ad&#x2026;</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://nhossaincse.com/content/images/2022/07/favicon-2.ico" alt="Kubernetes - Debugging Internal Connectivity of Pods"><span class="kg-bookmark-author">Nazmul Hossain</span><span class="kg-bookmark-publisher">Md. 
Nazmul Hossain</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://nhossaincse.com/content/images/2022/08/Kubernetes_logo-2.jpg" alt="Kubernetes - Debugging Internal Connectivity of Pods"></div></a></figure><p>So, as described in that post, we are going to use a Service, the best fit here, to reach <code>app2</code> from <code>app1</code>.</p><p>Hence, we now have two apps, each with one associated service. The service names are <code>app1-service</code> and <code>app2-service</code> respectively.</p><p>The manifests for the two services are given below:</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: app1-service
spec:
  selector:
    app: app1
  ports:
    - protocol: TCP
      port: 5678
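      # Requests arriving at the Service on port 5678 are forwarded to
      # targetPort on a pod matched by the label selector above.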
      targetPort: 80</code></pre><figcaption>app1-service.yaml</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: app2-service
spec:
  selector:
    app: app2
  ports:
    - protocol: TCP
      port: 5678
      targetPort: 8081</code></pre><figcaption>app2-service.yaml</figcaption></figure><h2 id="problem-statement-issue-or-challenge"><strong>Problem statement (Issue or Challenge)</strong></h2><p>Now we will try to reach <code>app2</code> from <code>app1</code>.</p><p>The first step is to get into <code>app1</code> through an interactive session. As <code>app1</code> is running with two pods, we can run the test from either one. To get the list of pods, the command is:</p><pre><code class="language-shell">$ kubectl get pods</code></pre><p>The list of running pods will look like this:</p><pre><code class="language-shell">NAME                                 READY   STATUS    RESTARTS      AGE
app1-deployment-596b67c689-4m28f     1/1     Running   0             10m
app1-deployment-596b67c689-6qkqw     1/1     Running   0             10m
app2-deployment-64ddcf47dd-8qfh4     1/1     Running   0             10m
app2-deployment-64ddcf47dd-bb8dt     1/1     Running   0             10m</code></pre><p>To open an interactive session inside a pod, the sample command is:</p><pre><code class="language-shell">$ kubectl exec -it &lt;pod_name&gt; -- /bin/bash</code></pre><p>In our case, we want to do this for an <code>app1</code> pod, so for the first pod the command is:</p><pre><code class="language-shell">$ kubectl exec -it app1-deployment-596b67c689-4m28f -- /bin/bash</code></pre><p>Now we are in an interactive session inside the pod (think of it as an independent Linux machine). Next, we need to install some basic network packages inside the pod:</p><pre><code class="language-shell">$ apt update -y &amp;&amp; apt install net-tools -y &amp;&amp; apt install dnsutils -y &amp;&amp; apt install curl -y</code></pre><p>Now, to get the <code>app2</code> <code>Fully Qualified Domain Name (FQDN)</code>, we can use the <code>nslookup</code> command with the service name as the argument, in our case <code>app2-service</code>:</p><pre><code class="language-shell">[ root@curl:/ ]$ nslookup app2-service
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      app2-service
Address 1: 10.106.29.144 app2-service.default.svc.cluster.local</code></pre><p>We see that the FQDN for the <code>app2-service</code> is <code>app2-service.default.svc.cluster.local</code>.</p><p>Now, we can <code>curl</code> to check the connectivity to <code>app2-service</code> on the service port <code>5678</code>:</p><pre><code class="language-shell">[ root@curl:/ ]$ curl app2-service.default.svc.cluster.local:5678
curl: (7) Failed to connect to app2-service.default.svc.cluster.local port 5678: Connection refused</code></pre><p>We got the error <code>port 5678: Connection refused</code>. </p><h2 id="possible-solutions"><strong>Possible solutions</strong></h2><p>First, we need to check which port the <code>app2</code> pods are actually listening on. To do this, we need to access an <code>app2</code> pod in interactive mode, just as we did for the <code>app1</code> pod. Don&apos;t forget to install the basic network utilities as we did for <code>app1</code>, i.e. by running <code>$ apt update -y &amp;&amp; apt install net-tools -y &amp;&amp; apt install dnsutils -y &amp;&amp; apt install curl -y</code>.</p><p>The command below will take us inside one of the two pods. The pod name on your side will certainly be different; get it by running <code>kubectl get pods</code>.</p><pre><code class="language-shell">$ kubectl exec -it app2-deployment-9598898-lm5sf -- /bin/bash</code></pre><p>Now we can check the ports that <code>app2</code> is listening on with the following command:</p><pre><code class="language-shell">$ netstat -ntlp</code></pre><p>The result would look like this:</p><pre><code class="language-shell">root@app2-deployment-9598898-lm5sf:/app# netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp6       0      0 :::80                   :::*                    LISTEN      1/dotnet</code></pre><p>We see that the app is listening on port <code>80</code>, not <code>8081</code>. So we need to correct the port to <code>80</code> in the manifests <code>app2-deployment.yaml</code> and <code>app2-service.yaml</code>:</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2-deployment
  labels:
    app: app2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
      - name: app2
        image: nhossaincse/apiapp1:latest
        args:
          - --text=hello-world
        imagePullPolicy: Always
        ports:
        - containerPort: 80</code></pre><figcaption>app2-deployment.yaml</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: app2-service
spec:
  selector:
    app: app2
  ports:
    - protocol: TCP
      port: 5678
      targetPort: 80</code></pre><figcaption>app2-service.yaml</figcaption></figure><p>After applying the corrected manifests, check the connectivity again with <code>curl</code> from the <code>app1</code> pod:</p><pre><code class="language-shell">root@app1-deployment-96b7f85bc-j5649:/app# curl app2-service.default.svc.cluster.local:5678/weatherforecast
[{&quot;date&quot;:&quot;2022-08-14T15:17:46.0946196+00:00&quot;,&quot;temperatureC&quot;:-2,&quot;summary&quot;:&quot;Freezing&quot;,&quot;temperatureF&quot;:29},{&quot;date&quot;:&quot;2022-08-15T15:17:46.095428+00:00&quot;,&quot;temperatureC&quot;:11,&quot;summary&quot;:&quot;Bracing&quot;,&quot;temperatureF&quot;:51},{&quot;date&quot;:&quot;2022-08-16T15:17:46.0954307+00:00&quot;,&quot;temperatureC&quot;:37,&quot;summary&quot;:&quot;Freezing&quot;,&quot;temperatureF&quot;:98},{&quot;date&quot;:&quot;2022-08-17T15:17:46.0954308+00:00&quot;,&quot;temperatureC&quot;:31,&quot;summary&quot;:&quot;Bracing&quot;,&quot;temperatureF&quot;:87},{&quot;date&quot;:&quot;2022-08-18T15:17:46.095431+00:00&quot;,&quot;temperatureC&quot;:53,&quot;summary&quot;:&quot;Sweltering&quot;,&quot;temperatureF&quot;:127}]root@app1-deployment-96b7f85bc-j5649:/app#</code></pre><p>Finally, success! We can now connect to <code>app2</code> from <code>app1</code>. The process to connect from <code>app2</code> to <code>app1</code> is the same. The API call result with <code>curl</code> from <code>app2</code> to <code>app1</code> is shown below.</p><pre><code class="language-shell">root@app2-deployment-788d664db7-kst6d:/app# curl app1-service.default.svc.cluster.local:5678/weatherforecast
[{&quot;date&quot;:&quot;2022-08-14T15:23:39.454618+00:00&quot;,&quot;temperatureC&quot;:17,&quot;summary&quot;:&quot;Balmy&quot;,&quot;temperatureF&quot;:62},{&quot;date&quot;:&quot;2022-08-15T15:23:39.4554411+00:00&quot;,&quot;temperatureC&quot;:0,&quot;summary&quot;:&quot;Chilly&quot;,&quot;temperatureF&quot;:32},{&quot;date&quot;:&quot;2022-08-16T15:23:39.4554442+00:00&quot;,&quot;temperatureC&quot;:32,&quot;summary&quot;:&quot;Warm&quot;,&quot;temperatureF&quot;:89},{&quot;date&quot;:&quot;2022-08-17T15:23:39.4554444+00:00&quot;,&quot;temperatureC&quot;:20,&quot;summary&quot;:&quot;Hot&quot;,&quot;temperatureF&quot;:67},{&quot;date&quot;:&quot;2022-08-18T15:23:39.4554445+00:00&quot;,&quot;temperatureC&quot;:-15,&quot;summary&quot;:&quot;Sweltering&quot;,&quot;temperatureF&quot;:6}]root@app2-deployment-788d664db7-kst6d:/app#</code></pre><h2 id="conclusion"><strong>Conclusion</strong></h2><p>We can use these steps to debug connectivity problems between apps deployed in Kubernetes and quickly locate the pain point.</p>]]></content:encoded></item><item><title><![CDATA[Kubernetes - Exposing Applications for Internal Access]]></title><description><![CDATA[<h2 id="purpose"><strong>Purpose</strong></h2><p>Often we need to communicate between pods deployed internally in Kubernetes. Here, internally means communicating without getting any public IP&#xA0;(which costs an extra $), rather in a local area network (LAN) manner.</p><h2 id="background"><strong>Background</strong></h2><p>When pods are created, they are assigned an IP address. 
You</p>]]></description><link>https://nhossaincse.com/exposing-applications-for-internal-access/</link><guid isPermaLink="false">62f63f07c8533a0001cc49e0</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[DevOps]]></category><dc:creator><![CDATA[Md Nazmul Hossain]]></dc:creator><pubDate>Fri, 12 Aug 2022 16:29:01 GMT</pubDate><media:content url="https://nhossaincse.com/content/images/2022/08/Kubernetes_logo-2.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="purpose"><strong>Purpose</strong></h2><img src="https://nhossaincse.com/content/images/2022/08/Kubernetes_logo-2.jpg" alt="Kubernetes - Exposing Applications for Internal Access"><p>Often we need to communicate between pods deployed internally in Kubernetes. Here, internally means communicating without getting any public IP&#xA0;(which costs an extra $), rather in a local area network (LAN) manner.</p><h2 id="background"><strong>Background</strong></h2><p>When pods are created, they are assigned an IP address. You use this IP to access the pod from anywhere within the Kubernetes cluster. Containers inside a pod share the same network space, which means that, within the pod, containers can communicate with each other by using the <code>localhost</code> address.</p><p>A Kubernetes cluster might be split across different nodes. A node is a physical machine where resources run. A cluster is a logical view of a set of nodes. These nodes are different machines, but they work together as a logical unit. 
This makes it easier to work with different machines at the same time because you can simply deploy resources to the cluster and not to individual nodes.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nhossaincse.com/content/images/2022/08/image-4.png" class="kg-image" alt="Kubernetes - Exposing Applications for Internal Access" loading="lazy" width="903" height="704" srcset="https://nhossaincse.com/content/images/size/w600/2022/08/image-4.png 600w, https://nhossaincse.com/content/images/2022/08/image-4.png 903w" sizes="(min-width: 720px) 720px"><figcaption>Figure 1: Internal Networking in a Kubernetes Cluster</figcaption></figure><h2 id="problem-statement-issue-or-challenge"><strong>Problem Statement (Issue or Challenge)</strong></h2><p>In a real-world environment, deployments are performed on a daily basis. When bugs are fixed or new features are added to an application, a new image version is created and deployed. This means that pods are constantly created and destroyed (the pods of the older version are removed and new pods are allocated for the newer version).</p><p>At the same time, applications usually have several replicas, and traffic is split across the replicas. This ensures that no single replica is overworked. This is called <code>load-balancing</code>.</p><p>In both use cases, the problem is the same: you need a way to reach the pods regardless of the machine where they are located.</p><h2 id="possible-solutions"><strong>Possible solutions</strong></h2><p>To solve this, Kubernetes introduces the concept of <code>Service</code>.</p><p>A service is an abstraction that defines access to a set of pods. By using a service, you don&#x2019;t access pods directly through their private IP addresses. 
Instead, a service targets several pods based on certain criteria (for example, a label) and forwards any requests to one of the pods matching that criteria.</p><p>In other words, a service allows you to group pods with a logical relationship and it allows you to reach them in a reliable way. At the same time, it implements a load-balancing mechanism among the pods that it targets.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nhossaincse.com/content/images/2022/08/image-1.png" class="kg-image" alt="Kubernetes - Exposing Applications for Internal Access" loading="lazy" width="1087" height="601" srcset="https://nhossaincse.com/content/images/size/w600/2022/08/image-1.png 600w, https://nhossaincse.com/content/images/size/w1000/2022/08/image-1.png 1000w, https://nhossaincse.com/content/images/2022/08/image-1.png 1087w" sizes="(min-width: 720px) 720px"><figcaption>Figure 2: Service acting as Access Point to Pods which redirecting traffic to pod replicas and as well as a Load Balancer</figcaption></figure><p>For example, if you want to have three replicas of your application then three pods will be created. If you create a service that targets these pods, then the service receives any incoming requests and routes them to one of them.</p><p>By default, a service is given a cluster-internal IP address, which is only valid within the cluster. This type of service is called <code>ClusterIP</code>. This means that pods deployed in the cluster can make requests to the service by using the ClusterIP.</p><p>The following diagram illustrates the communication between pods and services. 
For example, <code>Pod 1</code> uses the ClusterIP of <code>Service 2</code> to make requests to the service.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://nhossaincse.com/content/images/2022/08/image-3.png" class="kg-image" alt="Kubernetes - Exposing Applications for Internal Access" loading="lazy" width="1164" height="737" srcset="https://nhossaincse.com/content/images/size/w600/2022/08/image-3.png 600w, https://nhossaincse.com/content/images/size/w1000/2022/08/image-3.png 1000w, https://nhossaincse.com/content/images/2022/08/image-3.png 1164w" sizes="(min-width: 720px) 720px"><figcaption>Figure 3: Pod communication using Services</figcaption></figure><p><strong>Creating Kubernetes Services</strong></p><p>When creating a service, it is necessary to define the <code>port</code> that the service will serve on. This port is mapped to a <code>target port</code> inside the pod that the service targets. Incoming requests to the service on <code>port</code> are forwarded to the <code>target port</code> in the pod. If no <code>target port</code> is provided, then the <code>port</code> value is used.</p><p>There are two ways to create a service in Kubernetes:</p><ul><li><strong>Using <code>kubectl expose</code></strong></li></ul><p>The easiest way to create a service is by using the <code>kubectl expose</code> command.</p><pre><code class="language-powershell">$ kubectl expose deployment deployment-name --port=80 --name=service-name --target-port=3000</code></pre><p>The previous command creates a service named <code>service-name</code>, which targets deployment <code>deployment-name</code>. It listens on port 80 and it points to port 3000 inside the pod.</p><p>Use the command <code>kubectl get service</code> to list the services available. The output will provide you with information such as the ClusterIP (IP only valid within the Kubernetes cluster) and the port used to access the service. 
A sample output might look like this:</p><pre><code class="language-powershell">NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service-name   ClusterIP   10.105.52.105   &lt;none&gt;        80/TCP     3s</code></pre><ul><li><strong>Applying a manifest</strong></li></ul><p>An approach in line with the DevOps principles is creating services through a manifest. The following sample creates a service named <code>app1-service</code> and targets any pod with the label <code>app: app1</code>. The service listens for requests on port 8081 and forwards them to port 3000 inside the pod. Because the manifest does not include the <code>type</code> field, it creates a service with a type <code>ClusterIP</code>.</p><pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: app1-service (1)
spec:
  selector: (2)
    app: app1
  ports: (3)
    - protocol: TCP
      port: 8081 (4)
      targetPort: 3000 (5)</code></pre><ol><li>Name of the service</li><li>Labels used to select the target pods</li><li>Port mapping</li><li>The port that the service will serve on</li><li>The port inside the pod where requests are forwarded</li></ol><p><strong>Discovering Kubernetes Services</strong></p><p>A service abstracts your application from knowing the exact location of the pods you are using, but you still need to know the IP of the service to use it from your application. Directly using the IP is a bad idea because if the IP changes in the future, then you would need to manually update it in your application. To avoid this, Kubernetes provides two ways to discover services:</p><ul><li><strong>Environment variables</strong></li></ul><p>By default, when a service is created, Kubernetes injects some environment variables in pods within the same namespace. These variables follow the pattern below, where the service name is upper-cased and dashes are converted to underscores:</p><pre><code class="language-powershell">SERVICE_NAME_VARIABLE_NAME</code></pre><p>If you have a service named <code>app1-service</code>, it generates the following variables (non-exhaustive), which you can simply read from your application:</p><ul><li><code>APP1_SERVICE_SERVICE_HOST</code>, which contains the IP address of the <code>Service</code>. For example, <code>10.0.0.11</code></li><li><code>APP1_SERVICE_SERVICE_PORT</code>, which contains the port that the <code>Service</code> listens on. For example, <code>6379</code></li></ul><p>However, your application fetches the environment variables only on start-up. This means that if the value of a variable changes (for example, a service gets a different IP) after your application has started, then your application is not notified and it references an invalid value (the previous IP address of the service). 
The same happens if the service is created after your application boots up.</p><ul><li><strong>DNS</strong></li></ul><p>Given the limitations of the Kubernetes built-in environment variables, the preferred way of accessing services from your application is using DNS.</p><p>Every service in the cluster is assigned a DNS name, which matches the service&#x2019;s lower-cased name. This allows applications to access services always using the same reference. The default FQDN follows the pattern:</p><pre><code class="language-powershell">service.namespace.svc.cluster.local</code></pre><p>However, it is possible to avoid this long form. The DNS server also resolves the following hosts: </p><ul><li><em>service</em>.<em>namespace</em>.svc</li><li><em>service</em>.<em>namespace</em></li><li><em>service</em> (in this case, Kubernetes expects the service to be in the same namespace)</li></ul><p>For example, if you have a service named <code>app1-service</code> that exposes an HTTP endpoint on the default HTTP port (80), then you can use <code>http://app1-service</code> if your application is in the same namespace as the service. If the service was in a namespace named <code>namespace-a</code>, then you use <code>http://app1-service.namespace-a</code>.</p><h2 id="conclusion"><strong>Conclusion</strong></h2><p>Now we should be able to enable inter-pod network communication for applications deployed in Kubernetes and keep communication up even with automatic deployments. This is an effective way for internal communication between pods, as there are no extra costs like a LoadBalancer/Ingress Controller.</p>]]></content:encoded></item><item><title><![CDATA[Using Refit to Consume APIs in C#]]></title><description><![CDATA[Consuming APIs over HTTP is a very common scenario when building applications. 
In this article, we are going to explore Refit to consume APIs in C#.]]></description><link>https://nhossaincse.com/using-refit-to-consume-apis-in-cs/</link><guid isPermaLink="false">62f88a3fc8533a0001cc4e20</guid><category><![CDATA[.NET]]></category><category><![CDATA[C#]]></category><dc:creator><![CDATA[Md Nazmul Hossain]]></dc:creator><pubDate>Sun, 03 Jul 2022 05:51:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1517134191118-9d595e4c8c2b?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDIzfHxjb2RlJTIwfGVufDB8fHx8MTY2MDQ1NjU0Ng&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1517134191118-9d595e4c8c2b?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDIzfHxjb2RlJTIwfGVufDB8fHx8MTY2MDQ1NjU0Ng&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Using Refit to Consume APIs in C#"><p>Consuming APIs over HTTP is a very common scenario when building applications. In this article, we are going to explore Refit to consume APIs in C#.</p><p>Let&#x2019;s dive in.</p><h2 id="what-is-refit"><strong>What is Refit?</strong></h2><p>The <a href="https://github.com/reactiveui/refit" rel="nofollow noopener"><strong>Refit library</strong></a> for C# provides us with a <strong><strong>type-safe wrapper for interacting with HTTP-based APIs</strong></strong>. Instead of using <code>HttpClient</code>, which is provided for us by ASP.NET Core, we can define an interface that represents the API we want to interact with.</p><p>With this interface, we define the endpoints (GET, POST, PUT) our API contains, along with any route or body parameters. 
Also, we can include headers in the interface, such as ones for Authorization.</p><h2 id="components-of-a-refit-client"><strong>Components of a Refit Client</strong></h2><p>Before creating an application to demonstrate Refit, let&#x2019;s explore some of the main components that make up a Refit client.</p><h3 id="http-methods"><strong>HTTP Methods</strong></h3><p>Any time we interact with an API over HTTP, we must be familiar with the different <strong>HTTP methods</strong> available to us, and how they work. Refit provides a set of attributes that allow us to decorate our interface methods:</p><pre><code class="language-csharp">[Get(&quot;/users&quot;)]
Task&lt;IEnumerable&lt;User&gt;&gt; GetUsers();</code></pre><p>By decorating the <code>GetUsers()</code> method with <code>[Get(&quot;/users&quot;)]</code>, we tell Refit this is an HTTP GET method, to the <code>/users</code> endpoint.</p><p>Refit provides attributes for all the common HTTP methods.</p><h3 id="route-parameters"><strong>Route Parameters</strong></h3><p>When working with RESTful APIs that follow good <strong>routing conventions</strong>, we&#x2019;ll often see an endpoint like <code>/users/1</code>, which we would expect to return us a user with id 1. Refit uses attribute routing, the same as ASP.NET Core, that allows us to easily define routes that contain parameters: </p><pre><code class="language-csharp">[Get(&quot;/users/{id}&quot;)]
Task&lt;User&gt; GetUser(int id);</code></pre><p>By adding <code>{</code> and <code>}</code> around <code>id</code> in the route, we tell Refit that this is a dynamic parameter that comes from the <code>id</code> parameter in the <code>GetUser()</code> method.</p><h3 id="request-and-response-serialization"><strong>Request and Response Serialization</strong></h3><p>The most common way to send data over HTTP is by <strong>serializing it as JSON</strong> and adding it to the request body. Refit provides this automatically for us.</p><p>This allows us to provide classes as parameters to a Refit method, and also specify them as the return type that we expect to be returned from the API:</p><pre><code class="language-csharp">[Put(&quot;/users/{id}&quot;)]
Task&lt;User&gt; UpdateUser(int id, User user);</code></pre><p>Refit will automatically serialize the <code>user</code> parameter to JSON when sending the request and will attempt to deserialize the response into a <code>User</code> object.</p><h2 id="instantiating-a-refit-client"><strong>Instantiating a Refit Client</strong></h2><p>Refit provides us with two ways to instantiate a client, either by using the <code>RestService</code> class provided by Refit, or by registering the Refit client with <code>HttpClientFactory</code>, and injecting the interface into a class constructor.</p><p>Let&#x2019;s assume we have an API for interacting with users, along with a Refit interface:</p><pre><code class="language-csharp">public interface IUsersClient
{
    [Get(&quot;/users&quot;)]
    Task&lt;IEnumerable&lt;User&gt;&gt; GetUsers();
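    // Additional endpoints could be declared the same way, for example:
    // [Get(&quot;/users/{id}&quot;)]
    // Task&lt;User&gt; GetUser(int id);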
}</code></pre><p>First, we can instantiate the client using the <code>RestService</code> class:</p><pre><code class="language-csharp">var usersClient = RestService.For&lt;IUsersClient&gt;(&quot;https://mywebapi.com&quot;);
var users = await usersClient.GetUsers();</code></pre><p>We can also register the client with &#xA0;<code>HttpClientFactory</code> provided by ASP.NET Core:</p><pre><code class="language-csharp">services
    .AddRefitClient&lt;IUsersClient&gt;()
    .ConfigureHttpClient(c =&gt; c.BaseAddress = new Uri(&quot;https://mywebapi.com&quot;));</code></pre><p>Both of these are valid ways to register and use Refit clients. </p><p>However, <strong><strong>if we want to make our code more maintainable and testable, registering the client with </strong></strong><code>HttpClientFactory</code> <strong><strong>and injecting it into the required class constructors is the way to go</strong></strong>. This allows us to easily inject a mock of the interface for testing purposes, without having to rely on any of the implementation details of either <code>HttpClient</code> or the Refit library.</p><p>Because of this, we will opt for the latter method for the rest of this article.</p><h2 id="setting-up-an-api"><strong>Setting up an API</strong></h2><p>Instead of setting up a new API from scratch, we can use <a href="https://jsonplaceholder.typicode.com/" rel="nofollow noopener"><strong>JSONPlaceholder</strong></a>. It is a free, fake API that can be used for testing, and fits our needs perfectly. It provides various resources to interact with, but for this demo, we&#x2019;ll use the <code>users</code> resource.</p><h2 id="creating-console-application"><strong>Creating Console Application</strong></h2><p>With our API solution chosen, let&#x2019;s create a console application, either through the Visual Studio template or by using <code>dotnet new console</code>.</p><p>We must also add the Refit library from NuGet. As we will be using the <code>HttpClientFactory</code> registration method, we need to add two packages:</p><ul><li>Refit</li><li>Refit.HttpClientFactory</li></ul><p>As we have chosen the <code>users</code> resource, we&#x2019;ll create a User model:</p><pre><code class="language-csharp">public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }

    public override string ToString() =&gt;
        $&quot;Id: {Id}, Name: {Name}, Email: {Email}&quot;;
}</code></pre><p>We override the <code>ToString()</code> method so we can easily display the users retrieved from the API in the console.</p><p>Now we can create our Refit interface.</p><h2 id="implementing-refit-client"><strong>Implementing Refit Client</strong></h2><p>We start by creating an interface and defining a <code>GetAll</code> method:</p><pre><code class="language-csharp">public interface IUsersClient
{
    [Get(&quot;/users&quot;)]
    Task&lt;IEnumerable&lt;User&gt;&gt; GetAll();
}</code></pre><p>To turn this interface into a Refit client, we add the <code>Get</code> attribute to the <code>GetAll()</code> method, and define the route as <code>/users</code>. As the API will return us a list of users, the method return type is an <code>IEnumerable&lt;User&gt;</code>.</p><p>This is enough to get us started.</p><h2 id="consuming-api-data"><strong>Consuming API Data</strong></h2><p>As we&#x2019;ve opted to register our Refit client with the ASP.NET Core dependency injection framework, we need to add the <code>Microsoft.Extensions.Hosting</code> NuGet package to our console application.</p><p>With this done, let&#x2019;s register the Refit client in the <code>Program</code> class:</p><pre><code class="language-csharp">using IHost host = Host.CreateDefaultBuilder(args)
    .ConfigureServices((_, services) =&gt;
    {
        services
            .AddRefitClient&lt;IUsersClient&gt;()
            .ConfigureHttpClient(c =&gt; c.BaseAddress = new Uri(&quot;https://jsonplaceholder.typicode.com/&quot;));
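            // AddRefitClient returns a standard IHttpClientBuilder, so further
            // HttpClient settings (e.g. c.Timeout) or delegating handlers could be
            // chained here as well; this line is just the minimal setup.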
    }).Build();</code></pre><p>We use the <code>AddRefitClient()</code> extension method to register the <code>IUsersClient</code> interface, and then configure the <code>HttpClient</code>, setting the <code>BaseAddress</code> to the JSONPlaceholder address.</p><p>With our service registration complete, we can retrieve an instance of <code>IUsersClient</code>, and retrieve some users:</p><pre><code class="language-csharp">var usersClient = host.Services.GetRequiredService&lt;IUsersClient&gt;();
var users = await usersClient.GetAll();

foreach (var user in users)
{
    Console.WriteLine(user);
}</code></pre><p>First, we retrieve an <code>IUsersClient</code> from the service collection, and call the <code>GetAll()</code> method to retrieve a list of users, which we then print to the console.</p><p>This demonstrates how simple it is to use a Refit client to abstract HTTP calls. We make a method call that returns our populated <code>User</code> model.</p><p>Next, let&#x2019;s explore some of the further capabilities of Refit, by adding more methods to <code>IUsersClient</code>.</p><h2 id="extending-iusersclient"><strong>Extending IUsersClient</strong></h2><p>Let&#x2019;s add some basic CRUD (Create, Read, Update, Delete) operations for our API:</p><pre><code class="language-csharp">public interface IUsersClient
{
    [Get(&quot;/users&quot;)]
    Task&lt;IEnumerable&lt;User&gt;&gt; GetAll();

    [Get(&quot;/users/{id}&quot;)]
    Task&lt;User&gt; GetUser(int id);

    [Post(&quot;/users&quot;)]
    Task&lt;User&gt; CreateUser([Body] User user);

    [Put(&quot;/users/{id}&quot;)]
    Task&lt;User&gt; UpdateUser(int id, [Body] User user);

    [Delete(&quot;/users/{id}&quot;)]
    Task DeleteUser(int id);
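
    // Refit will also bind extra method parameters to the query string. A
    // hypothetical sketch (GetUsersByName is an assumed name, not part of this demo):
    // [Get(&quot;/users&quot;)]
    // Task&lt;IEnumerable&lt;User&gt;&gt; GetUsersByName([Query] string name);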
}</code></pre><p>Firstly, we add the <code>GetUser()</code> method, which takes an <code>id</code> parameter to identify the user we want to retrieve. We decorate this method with the <code>Get</code> attribute, and in the route we define a dynamic parameter using <code>{</code> and <code>}</code>.</p><p>Next up is the <code>CreateUser()</code> method, which takes a <code>User</code> as a parameter, and because we want this to be passed in the HTTP request body, we decorate the parameter with the <code>Body</code> attribute. This time, it&#x2019;s a <code>Post</code> request that the API expects.</p><p>To update a user, we need a <code>Put</code> method, combining both a route parameter, <code>id</code>, and body content, which is the <code>User</code> we want to update.</p><p>Finally, to delete a user, we make a <code>Delete</code> request providing the <code>id</code> of the user to delete.</p><p>This gives us CRUD functionality on the Users API. Now we can test this out.</p><h2 id="testing-crud-functionality"><strong>Testing CRUD Functionality</strong></h2><p>Back in the <code>Program</code> class, let&#x2019;s start by creating a new user:</p><pre><code class="language-csharp">var user = new User
{
    Name = &quot;John Doe&quot;,
    Email = &quot;john.doe@test.com&quot;
};
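
// Note: JSONPlaceholder fakes write operations, so the POST below returns a
// plausible new Id, but the resource is not actually persisted server-side.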

var usersClient = host.Services.GetRequiredService&lt;IUsersClient&gt;();
var userId = (await usersClient.CreateUser(user)).Id;

Console.WriteLine($&quot;User with Id: {userId} created&quot;);</code></pre><p>Initially, we create a new <code>User</code> object. With this <code>user</code>, we call <code>CreateUser()</code>, which will return a User object, giving us the <code>Id</code> of the newly created user, which we log to the console.</p><p>Next, we can retrieve an existing user using the <code>GetUser()</code> method:</p><pre><code class="language-csharp">var existingUser = await usersClient.GetUser(1);</code></pre><p>With this user, let&#x2019;s update the <code>Email</code>:</p><pre><code class="language-csharp">existingUser.Email = &quot;john.doe@gmail.com&quot;;
var updatedUser = await usersClient.UpdateUser(existingUser.Id, existingUser);

Console.WriteLine($&quot;User email updated to {updatedUser.Email}&quot;);</code></pre><p>Here, we use the <code>UpdateUser()</code> method, passing in the <code>Id</code> of the user, along with the updated user object.</p><p>The final step is to delete the user:</p><pre><code class="language-csharp">await usersClient.DeleteUser(userId);</code></pre><p>We simply call <code>DeleteUser()</code>, providing the <code>userId</code> to delete.</p><p>This covers the basic CRUD functionality and shows how simply we can create an interface to interact with an API, without the need to handle complex HTTP logic with an <code>HttpClient</code>.</p><h2 id="conclusion"><strong>Conclusion</strong></h2><p>In this article, we&#x2019;ve learned how we can abstract interaction with HTTP-based APIs by using Refit and creating a simple interface for our API. This allowed us to avoid dealing with complex HTTP logic, such as creating request messages and deserializing responses, and instead focus on the core logic relating to our applications.</p>]]></content:encoded></item><item><title><![CDATA[Search a String in Entire Database Tables Data and Stored Procedures]]></title><description><![CDATA[<h2 id="background-story"><strong>Background Story</strong></h2><p>You are assigned to a running project, with limited documentation. Live operations/transactions are going on in the project database. Now you need to understand the <code>dataflow</code>. Database <code>table</code> and <code>column</code> names are not very meaningful to you! 
Also, if the database has more than 50 <code>stored procedures</code></p>]]></description><link>https://nhossaincse.com/search-a-string-in-entire-database-tables-data-and-stored-procedures/</link><guid isPermaLink="false">62dbb67fc8533a0001cc4919</guid><category><![CDATA[SQL]]></category><dc:creator><![CDATA[Md Nazmul Hossain]]></dc:creator><pubDate>Sun, 12 Dec 2021 09:06:00 GMT</pubDate><media:content url="https://nhossaincse.com/content/images/2022/07/microsoft-sql-server595x3350_0-1.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="background-story"><strong>Background Story</strong></h2><img src="https://nhossaincse.com/content/images/2022/07/microsoft-sql-server595x3350_0-1.jpg" alt="Search a String in Entire Database Tables Data and Stored Procedures"><p>You are assigned to a running project, with limited documentation. Live operations/transactions are going on in the project database. Now you need to understand the <code>dataflow</code>. Database <code>table</code> and <code>column</code> names are not very meaningful to you! Also, if the database has more than 50 <code>stored procedures</code>, it is difficult to figure out which one contains a particular piece of hard-coded logic.</p><h2 id="possible-approach"><strong>Possible Approach</strong></h2><p>If the project has a UI/Front End application, make some test entries from it and check which <code>table/columns</code> store those data. </p><p>Here is a <code>generic</code> stored procedure that searches for a keyword across all database <code>tables</code> and <code>stored procedures</code>. It will give you the <code>location</code> of that keyword: the table/column or stored procedure it resides in.</p><pre><code class="language-SQL">CREATE PROCEDURE [dbo].[SearchAllTables] (
  @SearchStr nvarchar(100)
) AS BEGIN 

-- Purpose: To search all columns of all tables and stored procedures for a given search string
-- Written by: Md. Nazmul Hossain
-- Site: https://nhossaincse.com
-- Tested on: SQL Server 12.0

DECLARE @Results TABLE (
  ColumnName nvarchar(370),
  ColumnValue nvarchar(3630)
)

SET NOCOUNT ON

DECLARE @TableName nvarchar(256),
        @ColumnName nvarchar(128),
        @SearchStr2 nvarchar(110)

SET @TableName = &apos;&apos;
SET @SearchStr2 = QUOTENAME(&apos;%&apos; + @SearchStr + &apos;%&apos;, &apos;&apos;&apos;&apos;)

-- Walk every user table, then every character-typed column of that table,
-- collecting the rows whose value matches the search string
WHILE @TableName IS NOT NULL
BEGIN
  SET @ColumnName = &apos;&apos;
  SET @TableName = (
    SELECT MIN(QUOTENAME(TABLE_SCHEMA) + &apos;.&apos; + QUOTENAME(TABLE_NAME))
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_TYPE = &apos;BASE TABLE&apos;
      AND QUOTENAME(TABLE_SCHEMA) + &apos;.&apos; + QUOTENAME(TABLE_NAME) &gt; @TableName
      AND OBJECTPROPERTY(
            OBJECT_ID(QUOTENAME(TABLE_SCHEMA) + &apos;.&apos; + QUOTENAME(TABLE_NAME)),
            &apos;IsMSShipped&apos;
          ) = 0
  )

  WHILE (@TableName IS NOT NULL) AND (@ColumnName IS NOT NULL)
  BEGIN
    SET @ColumnName = (
      SELECT MIN(QUOTENAME(COLUMN_NAME))
      FROM INFORMATION_SCHEMA.COLUMNS
      WHERE TABLE_SCHEMA = PARSENAME(@TableName, 2)
        AND TABLE_NAME = PARSENAME(@TableName, 1)
        AND DATA_TYPE IN (&apos;char&apos;, &apos;varchar&apos;, &apos;nchar&apos;, &apos;nvarchar&apos;)
        AND QUOTENAME(COLUMN_NAME) &gt; @ColumnName
    )

    IF @ColumnName IS NOT NULL
    BEGIN
      INSERT INTO @Results
      EXEC (
        &apos;SELECT &apos;&apos;&apos; + @TableName + &apos;.&apos; + @ColumnName + &apos;&apos;&apos;, LEFT(&apos; + @ColumnName + &apos;, 3630)
         FROM &apos; + @TableName + &apos; (NOLOCK) WHERE &apos; + @ColumnName + &apos; LIKE &apos; + @SearchStr2
      )
    END
  END
END

-- Also search the definitions of stored procedures, views, functions, etc.
INSERT INTO @Results (ColumnName, ColumnValue)
SELECT DISTINCT
  o.name AS ColumnName,
  o.type_desc AS ColumnValue
FROM sys.sql_modules m
INNER JOIN sys.objects o ON m.object_id = o.object_id
WHERE m.definition LIKE &apos;%&apos; + @SearchStr + &apos;%&apos;

SELECT ColumnName, ColumnValue
FROM @Results
END
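
-- Example usage (the keyword &apos;invoice&apos; is hypothetical):
-- EXEC [dbo].[SearchAllTables] @SearchStr = N&apos;invoice&apos;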
</code></pre><blockquote>Note: for a big database this can take time, and it should not be run in a production environment, as it may lead to locking of operations.</blockquote>]]></content:encoded></item><item><title><![CDATA[Fastest Way to Get the Number of  Rows in a Table (SQL SERVER)]]></title><description><![CDATA[<p>Often we may need to check the number of rows in a `table`. If the table size is not big enough, then need not to worry much about that. But what if your table has more than `10M rows`? You will find the first two approach described in below is</p>]]></description><link>https://nhossaincse.com/fastest-way-to-get-the-number-of-rows-in-a-table-sql-server/</link><guid isPermaLink="false">62db972ec8533a0001cc4871</guid><category><![CDATA[SQL]]></category><dc:creator><![CDATA[Md Nazmul Hossain]]></dc:creator><pubDate>Tue, 23 Nov 2021 08:25:00 GMT</pubDate><media:content url="https://nhossaincse.com/content/images/2022/07/microsoft-sql-server595x3350_0.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://nhossaincse.com/content/images/2022/07/microsoft-sql-server595x3350_0.jpg" alt="Fastest Way to Get the Number of  Rows in a Table (SQL SERVER)"><p>Often we may need to check the number of rows in a <code>table</code>. If the table is not that big, there is not much to worry about. But what if your table has more than <code>10M rows</code>? You will find that the first two approaches described below are very slow and may lead to locking table operations.</p><!--kg-card-begin: markdown--><h3 id="approach-1">Approach 1:</h3>
<!--kg-card-end: markdown--><p>The first way that comes to mind is <code>count(*)</code>, like the snippet below, right? </p><pre><code class="language-SQL">select count(*) from [tableName];</code></pre><!--kg-card-begin: markdown--><h3 id="approach-2">Approach 2:</h3>
<!--kg-card-end: markdown--><p>This is just as costly, since <code>count(1)</code> still scans the whole table:</p><pre><code class="language-SQL">select count(1) from [tableName];</code></pre><!--kg-card-begin: markdown--><h3 id="approach-3">Approach 3:</h3>
<!--kg-card-end: markdown--><p>But the fastest one is the query below, as it reads the row counts stored in the partition metadata (for the heap or clustered index) instead of scanning the table. </p><pre><code class="language-SQL">select sum([rows])
from sys.partitions
where object_id=object_id(&apos;tablename&apos;)
and index_id in (0,1);</code></pre><blockquote>Note: please keep in mind that the fastest way to get the number of rows in a table will give you an <code>approximate</code> row count. </blockquote>]]></content:encoded></item></channel></rss>