How the Nepal government violates human rights in Nepal: Information technology and communication, the Nepali revolution, Lava Kafle, 17 years of history: 17 years of software development history by Lava Kafle, Data AI ML NLP Automation Consultant at GrowByData. Mr. Lava Kafle helps create high-quality software development processes, products, and technologies using the latest knowledge base; estimates cost and effort; researches prediction capabilities in healthcare and other domains to curb rising costs; and performs Verification and Validation (V&V) and testing. For more details: https://www.linkedin.com/in/lavakafle/ https://www.youtube.com/watch?v=FyYClQQ8GpA&feature=share



The #1 company in the world to directly hire #PhD holders @GrowBydata #Ecommerce



  1. TITLE: Senior Data Engineer

REQUIREMENTS

  • Proven knowledge of Data Warehousing, Data Integration techniques, source control, build, code integration and testing techniques.
  • Minimum 3 years of experience in SQL, SQL Server 2012+, ETL, etc.
  • Experience with Amazon Redshift and Talend ETL is a plus.
  • Excellent database internals and SQL optimization skills.
  • Strong grasp of the software delivery process within an Agile environment.
  • Knowledge/Experience with Business Intelligence tools (Jaspersoft, Pentaho, Tableau etc.) is preferred.
  • Knowledge of Big Data is preferred.
  • Openness to adapting to new technologies, tools, and processes such as continuous testing, integration, and deployment.
  • Ability to analyze business requirements and translate them into solution development.
  • Proactive with very strong analytical and technical troubleshooting skills.
  • Excellent written, verbal, interpersonal, and follow-up communication skills.
  • Experience with Big Data technologies such as Hadoop, MongoDB, etc. is a plus.

RESPONSIBILITIES

  • Gather and process raw data at scale, including writing scripts, web scraping, calling APIs, etc.
  • Process both structured and unstructured data in a way that is useful for analysis.
  • Work on data modelling and the design and development of the DW.
  • Increase the capabilities of the ETL layer of the data operation: extracting data from heterogeneous data sources, transforming it per business requirements, and loading it into the enterprise DW (a minimal sketch follows this list).
  • Provide support in all phases of the SDLC and ensure delivery of high-quality products.
  • Model and develop database artifacts such as tables, views, stored procedures, triggers, etc.
  • Work closely with our data team to integrate your amazing innovations and algorithms into our production systems.
  • Support business decisions with ad-hoc data analysis as needed.
  • Collaborate with data science research team on creating and evolving data formats to be flexible for scalable technology.
  • Additional responsibilities include creating and implementing security policies, configuring and maintaining database replication (including clusters), conducting daily backups, importing/exporting data to other systems, reporting, monitoring, and troubleshooting.
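
As a hedged illustration of the ETL responsibility above, here is a minimal Java/JDBC sketch that extracts rows from a source table, applies a trivial transformation, and batch-loads them into a warehouse table. The URLs, credentials, and table and column names are invented for illustration, not an actual GrowByData schema.

    import java.sql.*;

    // Minimal ETL sketch: extract from a source DB, transform, batch-load into a DW.
    // All URLs, credentials, and table/column names below are illustrative assumptions.
    public class MiniEtl {
        public static void main(String[] args) throws SQLException {
            try (Connection src = DriverManager.getConnection(
                     "jdbc:sqlserver://source-host;databaseName=sales", "etl_user", "secret");
                 Connection dw = DriverManager.getConnection(
                     "jdbc:sqlserver://dw-host;databaseName=warehouse", "etl_user", "secret")) {

                dw.setAutoCommit(false); // commit the whole batch atomically

                try (Statement extract = src.createStatement();
                     ResultSet rs = extract.executeQuery(
                         "SELECT order_id, amount, currency FROM raw_orders");
                     PreparedStatement load = dw.prepareStatement(
                         "INSERT INTO fact_orders (order_id, amount_usd) VALUES (?, ?)")) {

                    while (rs.next()) {
                        // Transform: normalize amounts to USD (placeholder rate for non-USD rows).
                        double amount = rs.getDouble("amount");
                        if (!"USD".equals(rs.getString("currency"))) {
                            amount *= 1.1; // placeholder conversion rate
                        }
                        load.setLong(1, rs.getLong("order_id"));
                        load.setDouble(2, amount);
                        load.addBatch();
                    }
                    load.executeBatch(); // bulk insert into the enterprise DW
                    dw.commit();
                }
            }
        }
    }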

Education: B.E. in Computer Science or equivalent.

Experience: 3+ years of prior experience in a relevant field is a must; 5+ years is preferable.

2. TITLE: Data Scientist

RESPONSIBILITIES

  • Develop/Plan required analytic projects in response to business needs
  • Contribute to data mining framework extensions and fine-tuning of data analysis methodology
  • Collaborate with interdisciplinary team members, including customers, analysts, data engineers, and computer scientists, to develop algorithms
  • Analyze large-scale disparate data (structured, unstructured, images) and develop insights based on various modelling techniques
  • Develop/Extend data matching and prediction algorithms to forecast demand, supply, price, consumer behavior, and other nuances in retail
  • Stay aware of industry-leading computation techniques, algorithms, and visualization tools
  • Use statistical techniques on datasets to measure results, show key trending patterns, identify impacts, and model and forecast (a small forecasting sketch follows this list)
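
To make the forecasting item above concrete, below is a minimal sketch of an ordinary-least-squares trend fit in Java, applied to an invented weekly demand series; a real pipeline would use a statistics library and far richer features.

    // Minimal sketch: fit y = a + b*t by ordinary least squares, forecast one step ahead.
    // The demand series below is invented for illustration.
    public class DemandTrend {
        public static void main(String[] args) {
            double[] demand = {120, 135, 128, 150, 162, 158, 171}; // weekly units (made up)
            int n = demand.length;

            // Accumulate the sums needed for the closed-form OLS solution.
            double sumT = 0, sumY = 0, sumTT = 0, sumTY = 0;
            for (int t = 0; t < n; t++) {
                sumT += t;
                sumY += demand[t];
                sumTT += (double) t * t;
                sumTY += t * demand[t];
            }
            double slope = (n * sumTY - sumT * sumY) / (n * sumTT - sumT * sumT);
            double intercept = (sumY - slope * sumT) / n;

            // One-step-ahead forecast for week n.
            double forecast = intercept + slope * n;
            System.out.printf("trend: %.2f units/week, next-week forecast: %.1f%n",
                              slope, forecast);
        }
    }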

REQUIREMENTS

  • Advanced software engineering with languages like Python, C++, or equivalent
  • Data engineering with good knowledge of languages like SQL, SAS, SPSS, or R
  • Deep statistics background, versed in regression modelling and advanced techniques like SVM, neural networks, optimization, data matching, and others
  • 2+ years of work experience involving quantitative data analysis to solve problems

EDUCATION AND TRAINING

  • Minimum Bachelor's degree; Master's and PhD graduates are encouraged to apply
  • Candidates with an academic background in numerical math, physics, statistics, econometrics, or the quantitative social sciences are also encouraged to apply
  • A computer science background with experience in distributed computing, simulation, algorithms, and machine learning is a plus

Getting up and running with Universal Connection Pool (via Martin’s Blog)


A great article with sample examples of Oracle UCP (Universal Connection Pool) driver usage.

Oracle's next generation connection pooling solution, Universal Connection Pool, can be a bit tricky to set up. This is especially true when a JNDI data source is to be used; most examples don't assume such a scenario. A lot of information is out there on the net, but no one seems to have given the full picture. During the research for chapter 11 of "Pro Oracle Database 11g RAC on Linux" I learned this the hard way. Since the book has been published … Read More

via Martin's Blog
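
For context, a minimal programmatic (non-JNDI) UCP setup looks roughly like the sketch below. It uses the standard oracle.ucp.jdbc API shipped in ucp.jar; the URL, credentials, and pool sizes are placeholder assumptions, and the JNDI-backed variant the excerpt discusses needs additional application-server configuration on top of this.

    import java.sql.Connection;
    import oracle.ucp.jdbc.PoolDataSource;
    import oracle.ucp.jdbc.PoolDataSourceFactory;

    // Minimal UCP sketch (direct, non-JNDI): requires ucp.jar and the Oracle JDBC
    // driver on the classpath. URL, credentials, and pool sizes are placeholders.
    public class UcpDemo {
        public static void main(String[] args) throws Exception {
            PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
            pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
            pds.setURL("jdbc:oracle:thin:@//db-host:1521/ORCL");
            pds.setUser("scott");
            pds.setPassword("tiger");
            pds.setInitialPoolSize(2);
            pds.setMinPoolSize(2);
            pds.setMaxPoolSize(10);

            // Borrow a connection from the pool; closing it returns it to the pool.
            try (Connection conn = pds.getConnection()) {
                System.out.println("Connected: "
                        + conn.getMetaData().getDatabaseProductVersion());
            }
        }
    }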