Wednesday, 12 March 2025

Portfolio Page Website

 

Some important sections of a portfolio webpage

Portfolio Sections:

Summary and About Me:

Brief information about yourself.

Skills:

Use this section to describe your skills and special achievements.

Education:

List your complete education details here.

Work Experience:

If you have any work experience, you can specify it here.

Open Source Projects Connected with GitHub:

You can also mention some of your mini projects here.

Big Projects:

Showcase any large projects in this section.

Achievements and Certifications 🏆:

Use this section to share information about any achievements and certifications you have.

Blogs:

Link to or summarize content from your own blogs.

Talks:

Use this section for special achievements like talks and public speeches.

Podcasts:

Mention any podcasts here.

Contact Me:

Your contact information.

Twitter Timeline:

You can embed your Twitter timeline or other social media profiles here.

GitHub Profile:

You can link your GitHub profile here.

Monday, 1 July 2024

LIFE INSURANCE & PLANS

 

Insurance is an essential financial tool that provides protection against various risks. 

Here are some key points about insurance: 
 Health Insurance: 
Health insurance covers medical expenses, hospitalization, and treatments. 
It offers cashless treatment at network hospitals. 
You can get tax benefits under Section 80D of the Income Tax Act. Compare health insurance plans from different insurers online.
Life Insurance: 
Life insurance provides financial security to your family in case of your demise. 
It pays a lump sum amount (sum assured) to the nominee. 
Types include term life insurance, whole life insurance, and endowment plans. 
Car and Bike Insurance: 
Motor insurance (car and bike) covers damages due to accidents, theft, or natural calamities. 
It is mandatory by law to have third-party liability insurance for vehicles. 
Comprehensive insurance covers own damage as well. 
Other Types of Insurance: 
Term Insurance: Provides a high sum assured at a low premium for a specific term. 
Child Investment Plan: Helps save for your child’s future education or marriage. 
Annuity Investment Plans: Offers regular income after retirement. 
ULIP (Unit Linked Insurance Plan): Combines insurance and investment. 
Travel Insurance: Covers medical emergencies and trip cancellations during travel.

LIFE INSURANCE TOPICS

Life Insurance 

A life insurance policy is a contract between the insurer and policyholder, wherein the insurer promises to pay a life cover in return for regular premiums paid by the insured.

What is a Life Insurance Policy & Its Meaning

A life insurance policy is an agreement between an insurance company and a policyholder, where the life insurer promises to pay a fixed amount of money in exchange for premiums paid periodically, after a set time period or upon the life insured’s death.

There are two simple types of life insurance policies:

Pure Protection plan:

Pure Protection plans, also called term insurance plans, are designed to protect your family's future by providing a lump sum payment in case of your untimely demise.

Savings Plan: 

A savings plan is a financial product that helps you plan long-term goals like buying a home, fees for children’s higher education, and more while providing life coverage benefits.

Life Cover For Family’s Protection

Build a financial backup and secure your family's future by choosing a term insurance plan. It is suitable for someone who wishes to ensure adequate financial backup is available to the family in case of his or her untimely death.

·         High Life Cover at low premiums

·         Critical illness cover

·         Tax Benefit

·         Return of Premium


Life Cover With Wealth Creation

Be financially secure by choosing an “Investment plan” to meet financial goals like your child’s education or stable income source for post-retirement. Suitable for someone looking for long term wealth creation through market-linked or guaranteed return plans in addition to family’s protection through in-built life cover.

·         Long Term Wealth Growth

·         Guaranteed Payouts

·         Tax Benefit

·         Return of Premium

TERM INSURANCE:    

Term insurance is the purest and most affordable type of life insurance plan, offering financial coverage to the policyholder in exchange for fixed premiums over a specific duration. In case of the policyholder's untimely death, the nominee receives the cover amount, as per the chosen policy.

·         Term Return of Premium (TROP):

Term Return of Premium (TROP) is a variant of term insurance that adds a survival benefit: in addition to the life cover, if the policyholder survives the entire policy term, all the premiums are paid back, excluding GST.

·         Whole life Insurance:

Under Whole Life Insurance, the policyholder is covered till the age of 100 years. If you want to leave a legacy for your family, and ensure that they are always financially covered, then Whole life Term Insurance is the best option for you.

INVESTMENT PLANS:

·         Market Linked Systematic Investment Plan (ULIP):

Unit linked investment plans (ULIPs) are unique market-linked life insurance plans that provide dual benefits of wealth creation through investments (in equity, debt or both) and a life insurance cover. High performing ULIPs have shown 15-20% returns (tax free), making it a popular choice for medium to long term investors.

·         Guaranteed Return Plan (Endowment Policy):

A guaranteed return plan or an endowment plan offers combined benefits of savings and insurance. It helps you save systematically on a regular basis and receive the maturity benefit on the survival of the policy term. These plans also offer death benefits on the death of the policyholder during the policy term.

·         Retirement Plans:

These are long-term investment plans which offer opportunities to get a stable post-retirement income. During the investment period, a premium amount is paid at regular intervals, which accumulates and grows. The maturity amount is then paid back post-retirement based on the preference in terms of lump sum or regular income.

·         Child Plan:

These plans are designed to provide financial security for children, where the returns on the investment help fulfill a child's future needs such as education. Child plans specifically ensure these goals remain intact even in your absence by providing life cover to the nominee and having the insurer fund the balance premiums, thus securing the child's financial future.


Tuesday, 2 April 2024

About Large Language Models

 An Overview of Large Language Models 

Introduction

Large Language Models (LLMs) have emerged as a powerful force in the field of natural language processing (NLP). These neural networks, with billions of parameters, can understand and generate human-like text. In this blog post, we’ll explore the significance of LLMs and their implications for Google SEO.

Understanding Large Language Models

1.     What Are LLMs?

o    LLMs are foundation models trained on massive amounts of data, enabling them to perform a wide range of tasks related to natural language understanding and generation.

o    They can infer context, generate coherent responses, translate languages, summarize text, and even assist in creative writing or code generation.

2.     Transformers: A Breakthrough Technology

o    Transformers, a type of neural network architecture, revolutionized NLP.

o    Unlike traditional recurrent neural networks (RNNs), transformers process text in parallel, handling long-range dependencies efficiently.

3.     Aligning with Search Intent

o    Google ranks pages based on search intent alignment.

o    Analyze existing first-page results to understand the most common content types, formats, and angles.

o    Ensure your page aligns with what searchers are looking for.

4.     Comprehensive Content Coverage

o    Your content should cover all aspects of a topic.

o    Consider user expectations and provide relevant information.

o    For example, if you’re writing about sneakers, include size filters and activity-specific recommendations.
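The parallel processing described under point 2 can be made concrete. Below is a minimal pure-Python sketch of scaled dot-product attention, the core operation of the transformer. This is an illustrative toy only: production models batch these computations as matrix multiplications on GPUs, and the tiny vectors here are made up for the example.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    # Scaled dot-product attention: every output position attends to
    # every input position, so long-range dependencies are handled in
    # one step instead of token-by-token as in an RNN.
    d = len(keys[0])
    outputs = []
    for q in queries:  # each position is independent, hence parallelizable
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # attention weights sum to 1
        out = [sum(w * v[j] for w, v in zip(weights, values))
               for j in range(len(values[0]))]
        outputs.append(out)
    return outputs

# Tiny 3-token example with 2-dimensional embeddings.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention(x, x, x)  # self-attention: tokens attend to each other
```

Because each query position is computed independently, all positions can be processed in parallel, which is the efficiency gain over recurrent networks mentioned above.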

Leveraging LLMs for SEO

1.     Keyword Optimization

o    LLMs excel at grasping context and crafting content around specific keywords.

o    Optimize your page title, meta description, headings, alt-text, and URL structure.

2.     Natural Language Generation (NLG)

o    Use LLMs to create high-quality, engaging content.

o    Generate blog posts, product descriptions, or FAQs.

3.     Content Expansion and Diversification

o    LLMs can help you expand existing content.

o    Create related articles, FAQs, or guides to enhance your website.

4.     Topic Research and Ideation

o    Use LLMs to brainstorm new content ideas.

o    Explore trending topics and relevant keywords.

5.     Personalization at Scale

o    Tailor content to user preferences.

o    LLMs can generate personalized recommendations or responses.

Wednesday, 13 March 2024

Overview of Large Language Models

Introduction to Artificial Intelligence (AI):

Artificial Intelligence (AI) refers to a machine-based system that can, for a given set of human defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.

Since its inception, Artificial Intelligence has undergone numerous developments. In 1956, at a scientific conference at Dartmouth College, American computer scientist John McCarthy first coined the term "Artificial Intelligence". During this conference, the audience reached a consensus that AI referred to the creation of machines with intelligence similar to that of humans.

AI development can be broadly categorized into three stages: ANI, AGI, and ASI. Artificial Narrow Intelligence (ANI), also known as weak AI, refers to the development of computer systems that are designed to perform a specific task or solve a particular problem. Artificial General Intelligence (AGI), also known as strong AI or human-level AI, refers to the development of computer systems that can perform any intellectual task that a human can. Artificial Super Intelligence (ASI) refers to the development of computer systems that surpass human intelligence and can perform intellectual tasks that exceed human capacity.

There are various Artificial Intelligence technologies that are used in our daily lives. Some examples include “smart writing” features that offer suggestions for email composition, spam message classifiers, and voice assistant applications like Amazon’s Alexa or Microsoft’s Cortana, which utilize natural language processing.

Artificial intelligence applications possess the capability to continuously learn from new experiences and make deductions based on past experiences gathered from data. In so doing, the machine is taught how to execute specific tasks based on the knowledge it has acquired from that data.

 Generative AI isn’t new – so what’s changed:

Generative AI refers to a subset of Artificial Intelligence that involves training machines to generate new and original data, such as images, music, text, or even videos.

Unlike traditional AI, which operates on pre-existing data sets to recognize patterns and make predictions, generative AI can produce entirely new content by learning from existing data sets and generating something new based on that information.

This technology has various applications, such as in art and design, content creation, and even the development of chat-bots and virtual assistants.

Generative Adversarial Networks (GANs) are a type of deep learning model that consist of two neural networks: a generator and a discriminator. The generator creates new data instances that resemble the training data, while the discriminator evaluates whether the generated data is similar to the training data or not. During training, the generator tries to produce data that can fool the discriminator, while the discriminator tries to distinguish between the generated data and the training data.

A Variational Autoencoder (VAE) is a type of neural network architecture used for generative modeling that employs both an encoder and a decoder network. The encoder network maps the input data into a latent space, while the decoder network maps the latent variables back into the original data space. By training the network to minimize the reconstruction error between the input and output data, the VAE can learn the underlying structure of the data distribution and generate new data samples from it.

Generative AI has revolutionized numerous industries by generating data for training machine learning models, producing top-notch images and videos, developing advertising texts, conducting awareness campaigns, and scripting virtual assistant dialogs for customer service and chatting.

However, despite its unique capabilities, users must carefully consider the strengths and limitations of these cutting-edge applications and choose them judiciously based on the task at hand.


Data privacy refers to the protection of an individual’s personal information or data, including sensitive information such as financial and health data, from unauthorized access, use, or disclosure. It involves controlling how data is collected, used, stored, shared, and disposed of by organizations or entities that collect or process that data.

Data privacy is an essential component of information security and is governed by various laws, regulations, and best practices aimed at ensuring the confidentiality, integrity, and availability of personal information.

Data privacy is necessary for several reasons. Firstly, it protects individuals’ personal information from being accessed, collected, and used without their knowledge or consent. Secondly, it ensures that sensitive information such as financial records, medical records, and confidential business information is kept safe and secure. Thirdly, it helps prevent identity theft and other forms of cybercrime. Additionally, data privacy is important for maintaining trust between organizations and their customers, as well as for complying with legal and regulatory requirements.

Without proper data privacy measures in place, individuals’ personal and sensitive information can be compromised, leading to potential harm and loss.

Transparency is crucial to data privacy because it enables individuals to know how their data is collected, processed, and used by organizations. By being transparent, organizations can provide clear and concise information about their data privacy practices, policies, and procedures. This empowers individuals to make informed decisions about whether to share their personal information and to understand the potential consequences of doing so.

ChatGPT is an Artificial Intelligence chatbot built to comprehend human language and produce text responses that closely mimic natural human language. It is capable of generating accurate and contextually appropriate responses to user input.

Recently developed by OpenAI, ChatGPT is a state-of-the-art chatbot that has been trained in multiple languages using advanced deep learning techniques.

Its sophisticated technology enables it to comprehensively understand text and effectively respond to a wide range of inquiries. The software has numerous potential applications across a variety of industries, including:

Learning and Teaching: 

ChatGPT has the potential to be a valuable resource for students and teachers alike, as it can assist with learning and teaching by providing information, answering questions, and addressing challenges related to the curriculum. It can serve as an ideal virtual assistant for educational purposes.

Consulting and Technical Support: 

ChatGPT can be utilized as an information source to offer advice and support in a wide range of technical areas such as IT, programming, engineering, and other related fields. Its ability to comprehend technical jargon and provide contextually relevant responses makes it an ideal virtual assistant for such applications.

Translation:

ChatGPT has the capability to enhance communication by accurately translating text between different languages, thereby facilitating cross-lingual communication.

Time Planning and Task Management: 

ChatGPT has the potential to improve personal and professional productivity by assisting with daily organization, task tracking, and priority setting. It can serve as a virtual assistant to help manage agendas, track tasks, and optimize time management, leading to increased productivity.

Marketing and Advertising: 

Marketing and advertising experts can utilize ChatGPT to craft appealing ad scripts and produce successful marketing content.

The applications of ChatGPT technology are extensive and diverse, encompassing various fields and industries.

 How are Large Language Models created:

A large language model is a computer program that learns and generates human-like language using a transformer architecture trained on vast text data.

Large Language Models (LLMs) are foundational machine learning models that use deep learning algorithms to process and understand natural language. These models are trained on massive amounts of text data to learn patterns and entity relationships in the language. LLMs can perform many types of language tasks, such as translating languages, analyzing sentiments, chat-bot conversations, and more. They can understand complex textual data, identify entities and relationships between them, and generate new text that is coherent and grammatically accurate.

Generative AI applications may produce imperfect outputs due to inaccuracies, outdated data, inherent biases, or malicious intentions. This could result in the generation of incomplete, false, or biased content. Additionally, detecting subtle biases in the outputs can pose further challenges.

Generative AI applications rely solely on self-learning to generate text. So while they may produce text that is grammatically correct, it may not always be factually accurate, or it can be misleading. Therefore, human supervision is crucial to ensuring the accuracy of the text produced by generative AI.

A report from IDC suggests that the amount of data created globally is projected to reach 175 zettabytes by 2025, a significant increase from 33 zettabytes in 2018.

This growth in data volume is driving the development of advanced AI models that can generate more comprehensive and realistic content.

The progress in machine learning and deep learning algorithms has enabled the training of generative AI models with vast amounts of data. OpenAI’s GPT-3 language model is a significant example, as it is trained on a massive collection of text data that exceeds 570 GB, making it one of the largest and most capable language models available today.

 Training Large Language Models:

From data collection and preprocessing to model configuration and fine-tuning, let us explore the essential stages of LLM training.

Whether you are an aspiring researcher or a developer seeking to harness the power of LLMs, this tutorial will provide a step-by-step guide to train your language model.

Data collection and datasets: 

LLM training involves gathering a diverse and extensive range of data from various sources. This includes text documents, websites, books, articles, and other relevant textual resources. The data collection process aims to compile a comprehensive and representative dataset that covers different domains and topics.

High-quality and diverse datasets are essential to training LLMs effectively, as they enable the model to learn patterns, relationships, and linguistic nuances to generate coherent and contextually appropriate responses. Data preprocessing techniques, such as cleaning, formatting, and tokenization, are typically applied to ensure the data is in a suitable format for training the LLM.
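The preprocessing steps above (cleaning, formatting, tokenization) can be sketched in a few lines. This is a deliberately simplified illustration: real LLM pipelines use subword tokenizers such as BPE rather than the whitespace-and-punctuation split shown here, and the sample text is invented.

```python
import re

def clean(text):
    # Cleaning: strip HTML-like tags, lowercase, collapse whitespace.
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text.lower()).strip()

def tokenize(text):
    # Tokenization: split into word and punctuation tokens.
    return re.findall(r"\w+|[^\w\s]", text)

def build_vocab(tokens):
    # Map each distinct token to an integer id, since models
    # consume numeric ids, not strings.
    vocab = {}
    for tok in tokens:
        if tok not in vocab:
            vocab[tok] = len(vocab)
    return vocab

raw = "<p>Large   Language Models learn from TEXT.</p>"
tokens = tokenize(clean(raw))
vocab = build_vocab(tokens)
ids = [vocab[t] for t in tokens]  # the sequence actually fed to the model
```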

Model configuration:

This stage involves defining the architecture and parameters of the model. This includes selecting the appropriate model architecture, such as transformer-based architectures like GPT or BERT, and determining the model size, number of layers, and attention mechanisms.

The model configuration step aims to optimize the model's architecture and parameters to achieve the desired performance and efficiency. Additionally, hyperparameters such as learning rate, batch size, and regularization techniques are set during this stage. Experimentation and fine-tuning of these configurations are often carried out to find the optimal balance between model complexity and computational requirements. The chosen model configuration significantly influences the LLM's ability to learn and generate high-quality outputs during subsequent training phases.

Model training:

Like any other deep learning model, the curated dataset is fed into the configured LLM, and its parameters are iteratively updated to improve performance. During training, the model learns to predict the next word or sequence of words based on the input it receives. This process involves forward and backward propagation of gradients to adjust the model's weights, leveraging optimization techniques like stochastic gradient descent.

Training is typically performed on powerful hardware infrastructure for extended periods to ensure convergence and capture the patterns and relationships present in the data. The model training stage is crucial for refining the LLM's language understanding and generation capabilities.
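As a toy illustration of the "predict the next word" objective described above, the sketch below learns next-word statistics by simple counting. Real LLMs instead adjust billions of neural-network weights by gradient descent, so treat this purely as an analogy for what the training data teaches the model; the corpus string is invented.

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    # "Training": count, for each word, which word follows it and how often.
    counts = defaultdict(Counter)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, word):
    # "Inference": predict the most frequent continuation seen in training.
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the model learns to predict the next word in the next sentence"
counts = train_bigram(corpus)
```

Here `predict_next(counts, "the")` returns "next" because "next" followed "the" most often in the training text, which is the counting analogue of the probability a trained LLM assigns to each candidate next token.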

 Fine-tuning of Large Language Models:

This stage involves customizing a pre-trained LLM on a specific task or domain by training it on a smaller, task-specific dataset. This process enhances the model's performance and adaptability to the target task. Fine-tuning typically involves training the LLM with a lower learning rate and fewer training steps to prevent overfitting.

By exposing the LLM to domain-specific data, it learns to generate more accurate and relevant responses. Fine-tuning allows LLMs to be applied to various specialized tasks while leveraging the general language understanding and generation capabilities acquired during pre-training.

Future of Large Language Models:

Although ChatGPT is revolutionary, I would not recommend asking it for medical advice. But how long before AI can help us achieve even that? How much more accurate do LLMs need to become? These are the questions that researchers are trying to answer right now.

As large language models (LLMs) continue to advance, the future holds immense possibilities for their development and application. Improved contextual understanding, enhanced reasoning abilities, and reduced biases are some of the key areas of focus for the ongoing research and innovation of future LLMs.

 Unexpected effects of scaling up LLMs:

While LLMs have achieved remarkable milestones, it is crucial to acknowledge their limitations, boundaries, and potential risks. Understanding these limitations empowers us to make informed decisions about the responsible deployment of LLMs, facilitating the development of AI that aligns with ethical standards. We will explore constraints such as context windows, issues of bias, accuracy, and outdated training data that impact LLMs' performance and usability. We will also explore data risks and ethical considerations associated with their use.

 Additionally, efforts are being made to address LLMs' ethical implications and challenges, such as data privacy, fairness, and transparency. Collaborative efforts between researchers, developers, and policymakers will shape the future of LLMs, ensuring their responsible and beneficial integration into various domains, including healthcare, education, customer service, and creative content generation.

What are the common applications of generative AI:

You can apply generative AI across all lines of business, including engineering, marketing, customer service, finance, and sales. Code generation is one of the most promising applications of generative AI.

There are many applications where you can put generative AI to work to achieve a step change in customer experience, employee productivity, business efficiency, and creativity. You can use generative AI to improve customer experience through capabilities such as chat-bots, virtual assistants, intelligent contact centers, personalization, and content moderation. You can boost your employees’ productivity with generative AI powered conversational search, content creation, and text summarization among others. You can improve business operations with intelligent document processing, maintenance assistants, quality control and visual inspection, and synthetic training data generation. Finally, you can use generative AI to turbo-charge production of all types of creative content from art and music with text, animation, video and image generation.


 


Wednesday, 20 December 2023

ABOUT LLMS TECHNOLOGY

An LLM is a machine-learning neural network trained through data input/output sets; frequently, the text is unlabeled or uncategorized, and the model uses a self-supervised or semi-supervised learning methodology. Information is ingested, or content entered, into the LLM, and the output is what the algorithm predicts the next word will be. The input can be proprietary corporate data or, as in the case of ChatGPT, whatever data it’s fed and scraped directly from the internet.

Training LLMs: 

Training LLMs on the right data requires the use of massive, expensive server farms that act as supercomputers.

LLMs:

LLMs are controlled by parameters, as in millions, billions, and even trillions of them. If the information an LLM has ingested is biased, incomplete, or otherwise undesirable, then the responses it gives could be equally unreliable, bizarre, or even offensive. When a response goes off the rails, data analysts refer to it as a "hallucination," because it can be so far off track.

ChatGPT has also been tested for biases that are implicit, that is, where the gender of the person is not obviously mentioned but only included as information about their pronouns. "If we replace 'she' in the sentence with 'he,' ChatGPT would be three times less likely to make an error," Kapoor said.

ABOUT LARGE LANGUAGE MODELS (LLMs)

If you need to condense an email or chat thread into a concise summary, a chatbot such as OpenAI’s ChatGPT or Google’s Gemini can do that. If you need to spruce up your resume with more eloquent language and impressive bullet points, AI can help with that too.

Want some ideas for a new marketing or ad campaign? Generative AI to the rescue. ChatGPT stands for Chat Generative Pre-trained Transformer. The chatbot’s foundation is the GPT large language model (LLM), a computer algorithm that processes natural language inputs and predicts the next word based on what it has already seen. Then it predicts the next word, and the next word, and so on until its answer is complete.
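That "predict the next word, and the next" loop can be sketched as greedy autoregressive decoding. The lookup table below is a hypothetical stand-in for the model's learned next-word distribution, kept tiny so the loop itself is visible.

```python
def generate(next_word_table, prompt, max_words=10, stop="<end>"):
    # Greedy autoregressive generation: repeatedly append the predicted
    # next word to the context until a stop token or the length limit.
    words = prompt.split()
    for _ in range(max_words):
        nxt = next_word_table.get(words[-1])
        if nxt is None or nxt == stop:
            break
        words.append(nxt)
    return " ".join(words)

# Hypothetical table standing in for the LLM's learned predictions.
table = {"how": "are", "are": "you", "you": "today", "today": "<end>"}
answer = generate(table, "how")  # "how are you today"
```

A real LLM conditions each prediction on the entire context so far, not just the last word, but the stop-and-append structure of the loop is the same.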

Open-source LLMs, in particular, are gaining traction, enabling a cadre of developers to create more customizable models at a lower cost. 

LLMs are a type of AI currently trained on a massive trove of articles, Wikipedia entries, books, internet-based resources, and other input to produce human-like responses to natural language queries. That is an immense amount of data. But LLMs are poised to shrink, not grow, as vendors seek to customize them for specific uses that don’t need the massive data sets used by today’s most popular models.

Saturday, 16 December 2023

ABOUT PROGRAMMING LANGUAGES

A programming language is a way for programmers (developers) to communicate with computers. Programming languages consist of a set of rules that allows string values to be converted into various ways of generating machine code or, in the case of visual programming languages, graphical elements.

Generally, a program is a set of instructions written in a particular language (C, C++, Java, Python) to achieve a particular task.

The synergy of programming languages powers the digital landscape.

From Python to C++, equip yourself with the tools to detect vulnerabilities and fortify systems against cyber threats. Join us in building a solid foundation in both programming and cyber security to stand strong in the ever-evolving digital world.

 

Major Types of Programming Languages

 Procedural Programming Languages: 

A procedural programming language follows a paradigm that uses procedures or functions to organize and categorize code into reusable blocks. C, Pascal, and FORTRAN are the most in-demand programming languages supporting this paradigm. 

In this paradigm, the program is divided into functions or procedures. These are primarily self-contained sub-programs that perform a specific task. Procedures can be called from other parts of the program, allowing for flexible programming and code reuse. The priority is the systematic execution of the program: it emphasizes a series of instructions and operates on data stored in variables.

These are among the best programming languages to learn, as they are widely used in fields like engineering, gaming, and finance. Procedural programming can be less flexible, though this does not stop it from being one of the most significant programming paradigms taught in high-ranking computer science courses.
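The paradigm is most associated with C, Pascal, and FORTRAN, but to keep all examples in one language, here is a procedural-style sketch in Python: the task is split into small procedures that are called in sequence and pass data between them.

```python
# Procedural style: each procedure does one specific task,
# and main() chains them together in order.
def read_scores(raw):
    # Parse a comma-separated string into integers.
    return [int(s) for s in raw.split(",")]

def average(scores):
    # Compute the mean of the parsed scores.
    return sum(scores) / len(scores)

def report(avg):
    # Format the result for display.
    return f"average score: {avg:.1f}"

def main():
    scores = read_scores("70,80,90")
    return report(average(scores))

result = main()  # "average score: 80.0"
```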

 Functional Programming Languages: 

Unlike procedural languages, functional programming languages are more flexible. A functional program is composed of a series of functions. This paradigm stresses modeling computation as data transformation. Haskell, Clojure, Lisp, and Scala are languages supporting this paradigm. 

Functional programming makes programs easier to reason about and increases their reliability. Functions operate solely on their input arguments. Functional languages are less popular, but they have experienced a colossal boom from the educational point of view. Functions are first-class values: they can be assigned to variables, passed as arguments to other functions, and returned as results from other functions. 

Functional programming also lends itself to efficient parallel programming. Because pure functions have no mutable state, independent units of work can run safely in parallel. Functional code supports nested functions and consists of independent units, which makes it easier to parallelize efficiently.
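The dedicated functional languages named above show this best, but the core ideas carry over to Python. The sketch below demonstrates pure functions and functions passed as arguments to other functions:

```python
from functools import reduce

# Pure functions: output depends only on the inputs, no mutable state.
square = lambda x: x * x
is_even = lambda x: x % 2 == 0

nums = [1, 2, 3, 4, 5]

# Functions are values here: square and is_even are passed as
# arguments to map() and filter() (first-class functions).
even_squares = list(map(square, filter(is_even, nums)))

# reduce() folds the list into a single value with another function.
total = reduce(lambda a, b: a + b, even_squares, 0)
```

Because none of these functions touch shared state, each element could in principle be processed in parallel, which is the property the paragraph above describes.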

 Object-oriented Programming Languages: 

In an object-oriented programming language, programs are organized around objects that bundle data and behavior. These objects typically include data attributes representing the object's state and methods that operate on that state. This paradigm enables users to build a complex system from interconnected objects.

Object-oriented languages hide implementation components from the outside world through encapsulation. This makes it possible to build large, intricate systems without worrying about the internal workings of individual objects. The other benefit that makes these languages so in demand is inheritance, which creates a hierarchy of classes that share common features while still allowing customization. 

Some popular object-oriented programming languages are Java, Python, C++, and Ruby. All are top programming languages, and they share the principle of being object-oriented. 
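Encapsulation and inheritance, the two benefits discussed above, can be shown in a short Python sketch; the account classes are invented for illustration.

```python
class Account:
    # Encapsulation: state (_balance) is hidden behind methods,
    # so callers never manipulate the internals directly.
    def __init__(self, owner, balance=0):
        self.owner = owner
        self._balance = balance

    def deposit(self, amount):
        self._balance += amount

    def balance(self):
        return self._balance

class SavingsAccount(Account):
    # Inheritance: shares Account's attributes and methods,
    # while adding customized behavior of its own.
    def add_interest(self, rate):
        self._balance += self._balance * rate

acct = SavingsAccount("Asha", 100)
acct.deposit(50)        # method inherited from Account
acct.add_interest(0.10) # method specific to the subclass
```

`SavingsAccount` reuses everything `Account` provides and customizes only what differs, which is the hierarchy-with-customization idea described above.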

 Scripting Languages: 

Simple to learn, with easy syntax and dynamic typing, scripting languages are interpreted rather than compiled. The two types are server-side scripting languages and client-side scripting languages. These languages also make it possible to glue together components written in other programming languages.

·         Python - Python is one of the easiest programming languages for developers to pick up. It is an object-oriented language with high-level data structures and built-in libraries, which make it easy to use and well suited to rapid application development. It is interpreted and dynamically typed.

·         Perl - Perl is a dynamic language with innovative features that set it apart from other tools available on Linux and Windows servers. High-traffic websites, including IMDb, have used Perl because it excels at text-manipulation tasks.

·         Bash (Bourne Again Shell) - Bash is a scripting language and the default command interpreter on most GNU/Linux operating systems. It is easier to learn than most other programming languages, and it makes it easy to create scripts, document them for others, and build up a collection of reusable scripts.
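A small reusable Bash script of the kind described above (the script's behavior and names are an invented example):

```shell
#!/usr/bin/env bash
# greet.sh: print a greeting for each name given on the command line,
# defaulting to "world" when no arguments are supplied.
set -euo pipefail

if [ "$#" -eq 0 ]; then
    set -- world
fi

for name in "$@"; do
    echo "Hello, $name"
done
```

Saved once and marked executable with `chmod +x greet.sh`, a script like this can be reused in other scripts or shared with teammates, which is the workflow the section describes.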

 Logical Programming Languages:

As the name suggests, logical programming is computer programming based on formal logic. A program in this paradigm consists of a cluster of logical statements, or rules, that determine relationships among objects, which allows the system to infer new information.

Artificial intelligence and expert systems, where reasoning and inference are required, commonly use these languages. Logical programs tend to be concise and expressive, which makes them easier to reason about and maintain than programs written in other paradigms.

To summarize, logical programming is a sound and flexible approach to solving problems in computer programming. It is not the right fit for every type of problem, but it is a valuable tool for certain applications. One of the most popular logical languages is Prolog, in which a program consists of a set of facts and rules that describe a problem and reason about it.
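To make the facts-and-rules idea concrete without Prolog itself, here is a tiny Python sketch that mimics the classic logic-programming example: inferring a grandparent relationship from parent facts (the names and the helper function are invented for illustration, not real Prolog):

```python
# Facts, in the spirit of Prolog's  parent(tom, bob).  parent(bob, ann).
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def grandparent(x, z):
    # Rule, in the spirit of Prolog's
    #   grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    # The system infers the new fact by finding an intermediate Y.
    return any(("parent", x, y) in facts and ("parent", y, z) in facts
               for (_, _, y) in facts)

print(grandparent("tom", "ann"))  # True
```

In real Prolog the interpreter performs this search automatically; the point here is only that new information (`grandparent`) is derived rather than stored.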

Imperative Programming: 

In imperative programming, the programmer provides a sequence of instructions that the computer follows to manipulate the program's state and the data structures within it. This paradigm describes the steps the computer must take to solve a problem rather than defining mathematical functions. C, C++, Java, and Python are some of the imperative programming languages.
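The contrast with the functional style is easy to see in a few lines of Python (an illustrative example): the program spells out each step and mutates a state variable as it goes.

```python
# Imperative style: explicit steps that change program state.
total = 0                # state the program will manipulate
for i in range(1, 6):    # instruction: repeat for i = 1..5
    total += i           # instruction: mutate the accumulator
print(total)             # 15
```

A functional version would instead express the same result as a single expression, such as `sum(range(1, 6))`, with no intermediate mutation.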

Imperative programming is the most widely used style in software development, especially for systems programming and low-level tasks that require direct control over hardware resources.

About GitHub Version Control


About version control with GitHub:

Version control is handled by version-control software, which stores your project and its history in what is called a repository.

About Repositories:

When you start a new project, you should make a folder to contain just the stuff for that project. 

When you want to back your work up on another computer, there are websites that specialize in git. The most popular is GitHub, acquired by Microsoft in 2018. In these notes, we’ll teach you how to use GitHub and assume that’s where you’re publishing your work.

If you want git to start tracking a folder and keeping snapshots, to enable the features listed above, you have to turn the folder into what is called a git repository, or for short, a repo.

By default, a folder on your computer is not tracked by git.
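Turning a folder into a repository takes a single command; a minimal sketch (the folder name is hypothetical):

```shell
cd my-project   # hypothetical project folder
git init        # creates a hidden .git directory; git now tracks this folder
git status      # shows which files are not yet tracked or committed
```

Everything git knows about the project, including every past snapshot, lives inside that hidden `.git` directory.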

Tracking changes in the repository:

As you work on the project, inevitably you have ups and downs. Maybe it goes like this:

You start by downloading a dataset from the instructor and starting a new blank Python script or Jupyter notebook in your repo folder. 

Everything’s fine so far. You try to load the dataset but keep getting errors. 

A friend at dinner reminds you about setting the text encoding, and that fixes the problem. 

You get the dataset loading before bed. You get the data cleaned without a problem. 

During class, the instructor asks your team to make progress on a hypothesis test, but you run out of time in class before you can figure out all the details. The last few lines of code still give errors. 
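The milestones in the story above map naturally onto commits, one snapshot per working state. A sketch, with hypothetical file names and commit messages:

```shell
# Snapshot 1: the dataset finally loads (after the encoding fix).
git add analysis.py
git commit -m "Load dataset; fix text encoding"

# Snapshot 2: cleaning went smoothly.
git add analysis.py
git commit -m "Clean data"

# Snapshot 3: honest work-in-progress before class ends.
git add analysis.py
git commit -m "WIP: hypothesis test, last few lines still error"
```

Each commit records the whole repository at that moment, so you can always return to the last state that worked.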

Sharing online:

The git term for a site on which you back up or publish a repository is called a remote. This is in contrast to the repo folder on your computer, which is called your local copy.

There are three important terms to know regarding dealing with remotes in git; I’ll phrase each of them in terms of using GitHub, but the same terms apply to any remote:

  • For repositories you created:

    • Sending my most recent commits to GitHub is called pushing my changes (that is, my commits).

  • For repositories someone else created:

    • Getting a copy of a repository is called cloning the repository. It’s not the same as downloading. A download contains just the latest version; a clone contains all past snapshots, too.

    • If the original author updates the repository with new content and I want to update my clone, that’s called pulling the changes (opposite of push, obviously).

Technically it is possible to both pull from and push to the same repository; the split above reflects the typical workflow, not a hard rule.
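The three remote operations correspond to three git commands; a sketch (the repository URL and branch name are illustrative):

```shell
# For a repository you created: send your recent commits to the remote.
git push origin main

# For a repository someone else created: get a full copy, history and all.
git clone https://github.com/someuser/someproject.git

# Later, fetch and merge the author's new commits into your clone.
git pull
```

Note that `git clone` differs from downloading a ZIP of the project: the clone includes every past snapshot, not just the latest version.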
